• JL (unregistered) in reply to Dragnslcr

    Dragnslcr:
    To be fair, it seems like this system is only for internal use. If that is the case, they shouldn't have to worry about someone packet sniffing the password

    Your suggestion might be valid in a very small company, but it's clearly a bad idea here.  As the article says, they already have a security system that limits which users can get into which applications and what data they can see. Presumably these limits exist for a reason -- perhaps legal or regulatory -- and the web service completely circumvents them.

    ...if someone who shouldn't be inside the network is inside the network, they have other problems.
    By similar reasoning: "Why have computer security at all?  If there's someone out there with malicious intent, you have other problems."  Blindly trusting everyone inside the network is ill-advised because it sets exactly one (low) hurdle between an attacker and the data.  Ideally, you want multiple hurdles ("defense in depth"), so that the failure of one security measure is not a catastrophe.  In this case, all it takes is one guy who thinks it would be cool to install his own wireless hub, and suddenly you're broadcasting your data to the coffeeshop across the street.

  • Michael (unregistered) in reply to doc0tis
    Anonymous:

    Alex Papadimoulis:
    "This way, if the Accounts Payable system ever needed to know who checked in some code to the Source Control system, it'd be a simple Web Service call."

    I love it. What AP module is complete without a code checkout report?

    Since everyone seems to think this is ridiculous,  allow me to give you a real world example:

    GlobalEnterprises has many contractors working as programmers.  Some contracts are for an hourly rate, some contracts are for the completion of some amount of work, regardless of hours.  Either way, GlobalEnterprises only pays OffshoreStaffing for actual work done. 

    So when OffshoreStaffing sends GlobalEnterprises a bill, AP wants to make sure they've actually earned that money.  How do you know what work a programmer has done?  Check source control.

    Also, like many large corporations, GlobalEnterprises pays these contracts out of the budget of the sub-department or project that the contractor is doing the work for.  Contractors often work for multiple sub-departments or projects over the course of a month, a week, or even a day.  How do you know who the programmer was doing the work for?  A well-structured source control system can tell you that too.

  • anonymous (unregistered) in reply to JL

    Anonymous:
    Ideally, you want multiple hurdles ("defense in depth"), so that the failure of one security measure is not a catastrophe.  In this case, all it takes is one guy who thinks it would be cool to install his own wireless hub, and suddenly you're broadcasting your data to the coffeeshop across the street.

    SSSSshhhhh... I don't want people to know that, and work from the coffeeshop across the street. 

  • Guything McThingGuy (unregistered) in reply to SneWs

    Sorry to be a grammar nazi - the correct past tense of "splend" is "splent".  As in "Fantastic, I just splent... hahaha, just splent :D".

  • Chris Travers (unregistered)

    One open source project I used to work with (and no longer do) had a very similar problem.  The application appeared to enforce username/password requirements for access, but I started to notice that sometimes changing the username in the URL was sufficient to grant access as another user.  I did some digging and discovered that the author was using session IDs to track access through a session.  Fair enough.  Except that the session ID wasn't stored on the server anywhere.  Looking through the code, I discovered the line:

    $session_id = localtime;

    And of course there were comparisons to show that the session ID was sufficiently new to be valid (bad guys don't have clocks, right?), but that was it.  The developer was unhappy when we brought this first to his attention, and then a few months later to the attention of the public.  In fact it took full disclosure and another two weeks to get the problem fixed, despite the fact that by that point the developer had known about the problem for over a year and had been notified by multiple people.

    Since full disclosure happened a couple of months ago, I figure I am free to say that this affected the popular open source accounting application SQL-Ledger, from version 2.4.4 to 2.6.17.  Partly because of this issue, the LedgerSMB project was born.

    Moral of the story: Security is not something to be taken for granted.
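To make the contrast concrete, here is a minimal Python sketch (the function names are mine) of what a `localtime`-style session ID gives an attacker versus an ID drawn from a cryptographically secure random source:

```python
import secrets
import time

def weak_session_id():
    # Roughly what "$session_id = localtime" produces: a value that
    # anyone who knows approximately when you logged in can guess.
    return time.strftime("%a %b %d %H:%M:%S %Y", time.localtime())

def strong_session_id():
    # 128 bits from the OS CSPRNG, rendered as 32 hex characters.
    # The server must also store the ID so later requests can be
    # validated against it, which is the part SQL-Ledger skipped.
    return secrets.token_hex(16)
```

A freshness check on a guessable value adds essentially nothing; unguessability plus server-side storage is what makes a session ID a credential.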

  • Chris Travers (unregistered) in reply to anonymous
    Anonymous:

    I am a web developer, and I don't know the right way to implement this, but here is one idea:

    • The client asks for service XYZ.
    • The server sends back a unique "challenge" string.
    • The client concatenates the "challenge" and the "password" and creates an MD5 hash of that.
    • The client sends that hash with the service call.
    • The server concatenates the "challenge" and the "password", creates the MD5 hash, and compares it with the one the client sent. If they match, it runs the service; otherwise it returns a detail-less error (ERROR 501 and nothing more informative).

    I think this idea is weak because you need to store the user passwords on both the client side and the server side. It would be better to forget passwords and only store a hash of the original password server-side. Another problem: you need two calls to get the data, the server needs some sort of session, and the scheme may be vulnerable to a man-in-the-middle attack.

    How would you enhance this? 

    Why re-invent the square wheel?

    Use SASL, GSSAPI, X.509, or any other accepted standard.

    Re-inventing NTLM gives you man-in-the-middle issues.  That is, if I can fool you into thinking I am the server (i.e. MITM), I can request the challenge, pass it through to you, and then send my own request in instead of yours.   What is worse, if you use this for internal network authentication (as NT4 does), the attacker doesn't even have to be in the middle.

    In the end, Kerberos/GSSAPI solves these problems quite well, X.509 solves them another way, and even an extensible system like SASL is likely to have had far more testing than your app will ever have.
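Just to pin down what the quoted challenge-response steps look like in code, here is a minimal Python sketch (function names are mine, and I've swapped the bare MD5 concatenation for HMAC-SHA256, which is the sturdier construction). It still inherits the MITM and password-storage weaknesses discussed above:

```python
import hashlib
import hmac
import secrets

# Shared secret known to both sides. In the quoted scheme this is the
# raw password, which is exactly the storage weakness the commenter notes.
PASSWORD = b"s3cret"

def server_issue_challenge():
    # Step 2: the server hands out a unique, unpredictable string.
    return secrets.token_hex(16)

def client_response(challenge, password=PASSWORD):
    # Steps 3-4: the client proves knowledge of the password without
    # sending it, by keying an HMAC over the challenge.
    return hmac.new(password, challenge.encode(), hashlib.sha256).hexdigest()

def server_verify(challenge, response, password=PASSWORD):
    # Step 5: recompute and compare in constant time; on failure the
    # caller should get a detail-less error.
    expected = hmac.new(password, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Note that nothing here binds the response to the *request contents*, which is why a relaying attacker can substitute his own request, the exact NTLM-style hole described above.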

  • enterprisey! (unregistered) in reply to stimmell
    stimmell:

    developer y: "We did some research and believe that upgrading to a 64-bit platform will solve our problems by extending the memory limitations for IIS worker processes"

    manager x: "What are you talking about?! 64-bit? That doesn't even exist! Come back with something realistic."

    developer y: "Isn't your laptop 64-bit?"

    manager x: "No, and I think I would know."

    developer y: "See that sticker right there next to your keyboard? Doesn't that say 64-bit?"

    manager x: "Well so what, I don't see how improved graphics could help our reporting problems"

    oh, that's good. that's priceless. where do i send my check - this kind of entertainment isn't cheap!
    captcha: 'java' - manager - no, a cup of coffee isn't enterprisey!

  • ASDF (unregistered) in reply to JL
    Anonymous:

    Dragnslcr:
    To be fair, it seems like this system is only for internal use. If that is the case, they shouldn't have to worry about someone packet sniffing the password

    Your suggestion might be valid in a very small company, but it's clearly a bad idea here.  As the article says, they already have a security system that limits which users can get into which applications and what data they can see. Presumably these limits exist for a reason -- perhaps legal or regulatory -- and the web service completely circumvents them.

    ...if someone who shouldn't be inside the network is inside the network, they have other problems.
    By similar reasoning: "Why have computer security at all?  If there's someone out there with malicious intent, you have other problems."  Blindly trusting everyone inside the network is ill-advised because it sets exactly one (low) hurdle between an attacker and the data.  Ideally, you want multiple hurdles ("defense in depth"), so that the failure of one security measure is not a catastrophe.  In this case, all it takes is one guy who thinks it would be cool to install his own wireless hub, and suddenly you're broadcasting your data to the coffeeshop across the street.

    No, your "Why have computer security at all?" argument misses one important point: in any organisation with valuable information, each employee (that is, everyone "inside" the network) is bound by a contract and/or an NDA, which makes them quite liable under internal, civil, and criminal justice systems in the event of a security breach or information leak, no matter how insignificant it is.

    The general public, that is, everyone "outside" the network can only be pursued through criminal justice.

    Therefore it is imperative to protect the network from external users, but some leniency can be given to internal users, as they are more trustworthy and can be punished more effectively.

    Although, yes, if you want real security, the internal users should also be scrutinised as much as public users...

  • promiscuous guy (unregistered) in reply to Anonymous Tart
    Anonymous:

    Ever heard of switches?

    Switch your adaptor to promiscuous mode and two things happen at our company:

    1) You find out you can't actually sniff anything not going to or from your local box

    2) You find my boot up your arse, and a P45 in the post for breaking the computer use policy

    That's all very well, but how do you know if someone has a network adaptor running in promiscuous mode?

  • pdavis (unregistered)

    OK, I don't think this code is really too far off the mark. I'd like to try to pull together all the good ideas from the previous posts...

    First off, we must assume that the connection to the web service will be made over SSL. We should probably add an if/then statement to confirm that an HTTPS request was made.

    Second, let's assume that the programmer who went to implement this was at least smart enough to know not to hard-code the password in the web service, and decided to put the password in the .config file, the system registry, or even a database. It isn't even much of a stretch to assume that the programmer will know to encrypt the password with a one-way encryption scheme before storing it.

    So far so good; this isn't really that far from a secure application, especially if we consider that most large networks now consist of switches rather than hubs, which decreases the chance of network sniffers grabbing the SSL-encrypted passkey.


    Next, we really don't want a two-pass authentication scheme if we can help it, and we really don't want the calling application to always pass the same passkey. What we can add to the mix is combining the password with a rolling key that is known and agreed to by both the calling application and the web service. A good candidate is part of the date, combined with something else and encrypted, then combined with the password, encrypted again, and sent to the web service. This way, even if the passkey is intercepted from the SSL connection, it will be time-sensitive and only function for a short period (minutes, hours, or at most a day, depending on the scheme used). Other shared datasets could also be used for the rolling key. Be aware that if the rolling key repeats itself too often, this can present a potential security hole.
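A rough Python sketch of that rolling-key idea (the names, and the one-day time bucket, are my assumptions): both sides derive the key from a coarse date string plus a shared secret, so an intercepted token stops working when the bucket rolls over.

```python
import hashlib
import hmac

# Agreed out-of-band by the calling application and the web service
# (hypothetical value for illustration).
SHARED_SECRET = b"agreed-out-of-band"

def rolling_token(password: str, day: str) -> str:
    # "day" is the coarse bucket, e.g. "2007-03-14". Both sides compute
    # the same bucket locally, so the token is only valid on that day.
    rolling_key = hmac.new(SHARED_SECRET, day.encode(), hashlib.sha256).digest()
    # Combine the rolling key with the password into the passkey sent
    # over the wire.
    return hmac.new(rolling_key, password.encode(), hashlib.sha256).hexdigest()

def verify(token: str, password: str, day: str) -> bool:
    # The service recomputes for the current bucket and compares in
    # constant time; an old token simply fails to match.
    return hmac.compare_digest(token, rolling_token(password, day))
```

In practice the service would check the current bucket and perhaps the adjacent one, to tolerate clock skew around the rollover.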

    How does this sound? Did I miss anything?

  • Tommy (unregistered) in reply to promiscuous guy
    Anonymous:

    Ever heard of switches?

    Switch your adaptor to promiscuous mode and two things happen at our company:

    1) You find out you can't actually sniff anything not going to or from your local box

    2) You find my boot up your arse, and a P45 in the post for breaking the computer use policy

    Ever heard of ARP poisoning the target computer (so the targeted computer will send all its packets to the attacker's PC), or of overflowing the switch's MAC table (basically turning the switch into a hub)?

    Given your ignorance and attitude, it seems like you're in the wrong position at your company. Maybe we should post it to the daily wtf. Oh wait...

    Anonymous:

    That's all very well, but how do you know if someone has a network adaptor running in promiscuous mode?

    See if they are replying to packets they shouldn't be replying to, or doing DNS lookups for IPs they're not accessing, or anything similar. Furthermore, to start the attack in the aforementioned ways, the attacker has to inject some packets into the network, which could be picked up by an IDS (intrusion detection system) running on the switch's monitoring port.

  • AC (unregistered) in reply to stimmell
    stimmell:

    I work with someone who used to be a developer, but "failed upward". Not only did his solutions hardly ever work consistently, but his design practices were, well, terrible. His code was generally an uncommented mess of hard-coded global variables, with a total lack of any sort of object-oriented design (or any distinguishable design pattern, for that matter) and no trace of any sort of error handling. So he has his new position and decides to start exercising some of his newfound authority. Our reporting department had been complaining that some of their reports take too long to generate. Although this is to be expected with multi-hundred-thousand-row reports consisting of sometimes a decade's worth of production data, we did what we could to optimize our reporting tools.

    Ultimately, what it boiled down to was the memory limits for IIS worker processes on 32-bit platforms. The obvious solution was to upgrade our aging reporting server to a more robust 64-bit platform. So when the aforementioned individual called a meeting with the reporting department to discuss possible solutions, we brought that suggestion to the table. Person X's response to our suggestion was classic. Mind you that this person was promoted all the way to the top of the development chain.

    developer y: "We did some research and believe that upgrading to a 64-bit platform will solve our problems by extending the memory limitations for IIS worker processes"

    manager x: "What are you talking about?! 64-bit? That doesn't even exist! Come back with something realistic."

    developer y: "Isn't your laptop 64-bit?"

    manager x: "No, and I think I would know."

    developer y: "See that sticker right there next to your keyboard? Doesn't that say 64-bit?"

    manager x: "Well so what, I don't see how improved graphics could help our reporting problems"

    Give me a break: if your reporting application requires the memory provided by 64-bit addressing, you have some serious problems. Multi-thousand-row reports can be handled in less than 16 MB of memory with good design.

    Today, I am working on a web application that handles multi-hundred-thousand-record tables and runs in less than 16 MB of memory (Apache and PHP, though I could do the same with IIS and PHP, or ASP). This application will perform just as well on the same system with millions of records in the database to report. The only limit is the maximum table size of the DBMS, or disk space; system RAM has nothing to do with it.

    64-bit systems are for huge supercomputing applications; it is sad to see how many people have bought into thinking they're needed for a standard web application or to play FPS games. I mean, really, how many web servers or FPS game clients out there need more than 4 GB of memory?
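The claim above rests on streaming rows to the output as they arrive rather than materializing the whole report in memory. A toy Python sketch of the pattern (names invented; a generator stands in for a database cursor):

```python
import csv
import io

def stream_report(rows, out):
    # Write each row as it arrives. Peak memory is one row at a time,
    # regardless of whether there are thousands or millions of rows.
    writer = csv.writer(out)
    writer.writerow(["id", "value"])
    count = 0
    for row in rows:
        writer.writerow(row)
        count += 1
    return count

def fake_cursor(n):
    # Stands in for a DB cursor fetching rows lazily; nothing here
    # ever holds the full result set in memory.
    for i in range(n):
        yield (i, i * 2)
```

With a real DBMS the same effect comes from iterating a server-side cursor and flushing the HTTP response incrementally, which is why the row count, not RAM, becomes the limiting factor.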

    There's one born every minute... 

Leave a comment on “The Super Secure Web Service”
