Subject: Re: [Wireshark-dev] performing cpu/time intensive computation in a protocol dissector

From: Andrew Hood <ajhood@xxxxxxxxx>
Date: Thu, 07 Aug 2008 09:26:02 +1000
FYI, I've read Richard's reply.

Luis EG Ontanon wrote:
> Insecurity people panic... security people take action...

Possibly a poor choice of words. You can't have dealt with the way a
large organisation reacts to stress. Panic precedes action because panic
is easy and action is not. The ones who panic tend to be the managers of
the people who can take action.

> Security people that ban a program that finds/exploits a hole are not
> security people... security people make sure a well-known, high-impact
> vulnerability is taken away.

That statement would be less troublesome if it did not include the word
"exploits".

People testing for potential exploits cause me grief every few months.
They are following the directives and procedures laid down from above by
people who don't understand all the complexities of the business but
have to satisfy the auditors.

> I think that letting users know that e.g. their bank's website SSL
> key is broken is a good thing; they will avoid using it and start
> complaining (as I did, and now my bank uses a secure key; had I not
> tested the key, I might have been using it for longer).

I repeat my earlier observation. There are enough key testers out there
to find the broken ones. Wireshark does not need this capability.

1) how many even modest size banks generate their SSL keys with Debian?
2) how many banks will not have heard of the vulnerability?
3) how many banks would wear the potential exposure of not fixing any
compromised keys?
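
(As an illustration of the "key testers out there" point: for the Debian
weak-key problem a standalone checker does not need to regenerate candidate
keys at all. It can hash the server's public key and look the fingerprint up
in a precomputed blacklist, which is one cheap lookup per certificate. The
sketch below is only a rough outline of that idea, not Wireshark code: it
uses the third-party Python "cryptography" package, and the blacklist file
name and fingerprint format (one hex SHA-256 of the DER-encoded
SubjectPublicKeyInfo per line) are invented for the example rather than
taken from the real openssl-blacklist package.

# Rough sketch only: check one server's public key against a local,
# precomputed list of known-weak key fingerprints.  The blacklist file
# name and fingerprint format are hypothetical, not the real Debian
# blacklist layout.
import hashlib
import ssl
import sys

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

BLACKLIST_FILE = "weak_key_fingerprints.txt"  # hypothetical: one hex SHA-256 per line

def load_blacklist(path):
    # Load the precomputed fingerprints of known-weak public keys into a set.
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def server_key_fingerprint(host, port=443):
    # Fetch the server certificate and hash the DER-encoded
    # SubjectPublicKeyInfo of its public key.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_der_x509_certificate(ssl.PEM_cert_to_DER_cert(pem))
    spki = cert.public_key().public_bytes(Encoding.DER,
                                          PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).hexdigest()

if __name__ == "__main__":
    host = sys.argv[1]
    weak = load_blacklist(BLACKLIST_FILE)
    fp = server_key_fingerprint(host)
    if fp in weak:
        print("%s: public key matches a known-weak fingerprint" % host)
    else:
        print("%s: no match in the local blacklist" % host)

The expensive part, enumerating the weak keys, only has to be done once,
offline; the per-capture check is then a set lookup, and it can live in an
external script instead of a dissector.)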

I have experience of working with a large bank. They take their security
seriously. It makes our jobs much more complex and costs a significant
amount, but it is the customer's money and the bank's business they are
protecting.

> The doctrine of not making people aware of vulnerabilities is a
> botched one. The point is: Bad people will know about the
> vulnerability. Having good people not know makes them unable to
> take action, so the result is vulnerability.
> 
> It's wrong to blame whoever finds a problem for it...

That's not the issue. Reporting a problem is correct procedure and you
may not be in a position to fix it. Testing and change procedures have
to be followed. If the problem is so serious that you can't wait to get
the changes in, then you take the system offline.

In particular, you have to be prepared to explain why you found a
problem which is outside the bounds of your responsibility. "Why were
you looking for this problem? You must have been looking for an
exploit." Poking your nose in where it does not belong is not a good
career move.

Having a tool which could be perceived as being able to take advantage
of a problem is not acceptable. It stands a fair chance of getting you
fired and possibly prosecuted.

I'm not going near the network with such a tool without a signed,
notarised authority from the head of internal security, with a copy
approved and lodged with my solicitors.

> On Wed, Aug 6, 2008 at 1:49 PM, Andrew Hood <ajhood@xxxxxxxxx> wrote:
> 
>>Sake Blok wrote:
>>
>>
>>>May I have your votes please? ;-)
>>>
>>>1) Don't include the code at all
>>
>>There are enough weak key identifiers out there without burdening
>>Wireshark with a CPU-intensive test for a one-off problem. The next time
>>someone finds a weakness it is bound to be a different problem needing
>>different discovery.
>>
>>I don't want to have anyone in our networks using a version of Wireshark
>>with the ability to crack keys. It will panic the security people and
>>they will ban Wireshark totally. Wireshark is too useful to let that happen.

-- 
There's no point in being grown up if you can't be childish sometimes.
                -- Dr. Who