• (cs) in reply to Olddog
    Olddog:

    The bot did the damage (as proven).

    Whilst that's necessary for a case to exist at all, it's not sufficient for establishing liability. The damage has to be foreseeable, proximate, and preventable - at least, that's (broadly speaking) the case under English common law; I'm led to believe that American law is similar.

    If my robotic lawn-mower (on its own) somehow finds its way into my neighbor's fence-less garden, I'm safe... right? It's a robot. Or... am I responsible for its actions?

    We need more information. Is your own garden fenced? Does your robotic lawn mower have a radius of action, or some other means of detecting the boundaries of your lawn? Does it instruct you to not leave it unsupervised? What damage was done to the neighbour's property? They might have a case if it'd destroyed some rare and precious flowers, for example, but they'd have a hard time arguing that you were liable for killing their pet goat, or that you owed them a new house.

    (IANAL; I was just a law student until a couple of weeks ago. This isn't legal advice; as noted above, you have to pay for that - though I do believe it's common sense. The person who represents themselves has a fool for a lawyer.)

  • R (unregistered)

    I knew the dangers of client-side authentication at the age of 12. Honest. Not that I had any idea how to implement anything else, of course (I had it asking for a password before a page loaded - no worries about cookies, but turning JavaScript off stops it, of course).

    I have seen one good form of it: take a username and password, combine them with some sort of algorithm, and use the result as the URL of the page to load. Short of brute-forcing the directory for valid HTML pages, I don't think that can be broken.
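
    A rough sketch of that idea, assuming the protected pages have been pre-generated under a /private/ directory (the path, function name, and choice of SHA-256 are illustrative, not the commenter's actual scheme):

        // Hash the username and password together and load the page whose
        // name is the digest. Uses the browser's Web Crypto API.
        async function openProtectedPage(username, password) {
          const data = new TextEncoder().encode(username + ':' + password);
          const digest = await crypto.subtle.digest('SHA-256', data);
          const hex = Array.from(new Uint8Array(digest))
            .map(b => b.toString(16).padStart(2, '0'))
            .join('');
          // Wrong credentials simply produce a URL that doesn't exist (404);
          // the only real protection is that the directory can't be listed.
          window.location.href = '/private/' + hex + '.html';
        }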

  • Olddog (unregistered) in reply to R
    Anonymous:

    I knew the dangers of client-side authentication at the age of 12. Honest. Not that I had any idea how to implement anything else, of course (I had it asking for a password before a page loaded - no worries about cookies, but turning JavaScript off stops it, of course).

    I have seen one good form of it: take a username and password, combine them with some sort of algorithm, and use the result as the URL of the page to load. Short of brute-forcing the directory for valid HTML pages, I don't think that can be broken.

    Doesn't that require a web page for each url=secret_function(user,pswd) combination? That seems like a tough way to manage users. But if you have no server-side options... it's an interesting solution. Although I wouldn't bet the farm against a brute-force directory scan. Also, if it's totally JavaScript, it could be reverse-engineered from the JavaScript side, given a good list of typical usernames and passwords.

  • Olddog (unregistered) in reply to gwenhwyfaer
    gwenhwyfaer:
    Olddog:

    The bot did the damage (as proven).

    Whilst that's necessary for a case to exist at all, it's not sufficient for establishing liability. The damage has to be foreseeable, proximate, and preventable - at least, that's (broadly speaking) the case under English common law; I'm led to believe that American law is similar.

    If my robotic lawn-mower (on its own) somehow finds its way into my neighbor's fence-less garden, I'm safe... right? It's a robot. Or... am I responsible for its actions?

    We need more information. Is your own garden fenced? Does your robotic lawn mower have a radius of action, or some other means of detecting the boundaries of your lawn? Does it instruct you to not leave it unsupervised? What damage was done to the neighbour's property? They might have a case if it'd destroyed some rare and precious flowers, for example, but they'd have a hard time arguing that you were liable for killing their pet goat, or that you owed them a new house.

    (IANAL; I was just a law student until a couple of weeks ago. This isn't legal advice; as noted above, you have to pay for that - though I do believe it's common sense. The person who represents themselves has a fool for a lawyer.)

    Happy holidays.  Congratulations on your new credentials. 

    The robotic lawn mower was a hypothetical, as it relates to internet robots. My point was poorly stated. I was looking for an opinion on the responsibility of ownership and deployment of a web robot.

    The "fence" was to imply the use of the robots.txt file, which is a marginally accepted gestalt directive for robots to follow, if the robot chooses.

    Therein lies the rub. The internet is a robust set of "recommendations" with no enforceable rule that governs what a robot (web paparazzi) can harvest. It's highly possible for older websites NOT to have a robots.txt file at all - more than likely, it's probable.

    Essentially, those websites *ARE* the garden without a fence.

    It's obviously too much to ask of a robot to foresee the consequences of its intentions, so maybe the correct thing for a robot to do is to "knock" first. (Any robots.txt file? No? Then go away.) That'll never happen.
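
    A minimal sketch of that "knock first" rule, written as a Node-style crawler helper. The go-away-if-no-robots.txt behaviour is the hypothetical politeness rule described above (real crawlers treat a missing robots.txt as "allow everything"), and the Disallow parsing is deliberately naive:

        // Fetch robots.txt before touching anything else; if it's missing,
        // or if any Disallow prefix covers the requested path, don't crawl.
        async function politeFetch(baseUrl, path) {
          const res = await fetch(new URL('/robots.txt', baseUrl));
          if (!res.ok) return null;                    // no robots.txt? knock refused - go away
          const disallowed = (await res.text()).split('\n')
            .filter(line => line.toLowerCase().startsWith('disallow:'))
            .map(line => line.slice('disallow:'.length).trim())
            .some(prefix => prefix !== '' && path.startsWith(prefix));
          if (disallowed) return null;                 // the fence says keep out
          return (await fetch(new URL(path, baseUrl))).text();
        }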

    Not to mention EULAs, which robots tend to ignore as if they had diplomatic immunity. I know, this is just pining for the poor website, but I hope at some future point someone (perhaps a programmer/lawyer) will design a practical license to abate the rampant growth of unleashed robots.

  • Unomi (unregistered) in reply to cheesy
    cheesy:
    Say you have a poll on your page that uses GETs to submit the votes. Each time a bot visits your page and tries to follow a vote link, it will count as a vote. Definitely not as bad as deleting content, but still not so great.

     I thought about having a poll and using GETs as well... many do, actually. What's great is forcing your opinion on a poll by putting the 'vote' URL in an image tag and using it in your signature or as an avatar on a very, very busy website.

     Visitors come to that very, very busy website, your avatar is in place, and each visitor's browser makes the request for the same vote unnoticed. At the very least every visitor will be counted once; quite possibly the same vote will be submitted multiple times.

     So, insist on POST requests for every form you code; otherwise...
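
     To illustrate the point: spiders following links and browsers loading image tags only ever issue GET requests, so a vote endpoint that accepts nothing but POST can't be tripped by either. The /vote URL and its parameter below are made up for the example:

        // Cast a vote with an explicit POST; a crawled <a href="/vote?option=3">
        // link or an <img src="/vote?option=3"> avatar would send a GET and be
        // rejected by a server that only accepts POST for this endpoint.
        async function castVote(option) {
          const res = await fetch('/vote', {
            method: 'POST',
            headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
            body: new URLSearchParams({ option: String(option) })
          });
          return res.ok;
        }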

     And for the Smeagol parody: I was watching the movie at Christmas, so the scene referred to is very fresh in my mind. Well done!

     - Unomi -

  • Raychan (unregistered)

    I once managed to delete all content from my company's CMS after adding a search feature via HT://DIG. The site had multiple subsites with different user levels, but my boss wanted to be able to search the whole site. So I installed HT://DIG and made it index the site as administrator, and of course it spidered all the 'delete-this-page' links. Fortunately there were daily backups.

  • Alex (unregistered)

    Funny, we had exactly the same problem waaaay back in 1998.

    We had a demo application for our customers with a clickable 'admin' link. It had a large list of users and their files. So after a few days we found that somebody had deleted all the files and profiles.

     It turned out that it was AltaVistaBot. So from that day on I remember to put a robots.txt on each site that has demo applications with clickable 'delete' links :)
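
     For reference, a robots.txt along these lines would have kept well-behaved crawlers (AltaVista's included) out of the demo area - though nothing actually forces a robot to obey it. The /demo/ path is just an example:

        # Keep all compliant robots out of the demo application
        User-agent: *
        Disallow: /demo/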

  • (cs) in reply to gwenhwyfaer

    Hello. If you develop web applications, how do you perform validation? Most developers I know will use client-side JavaScript to do it and assume that the data will be correct when it reaches the server.
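
    Client-side validation is a convenience, not a control: a robot, or a user with JavaScript switched off, never runs it, so every check has to be repeated on the server. A tiny sketch of server-side re-validation, with the field names and limits made up for illustration:

        // Server-side re-validation (Node-style); never trust that the
        // browser already ran an equivalent check.
        function validateSignup(body) {
          const errors = [];
          if (typeof body.email !== 'string' || !body.email.includes('@')) {
            errors.push('invalid email');
          }
          const age = Number(body.age);
          if (!Number.isInteger(age) || age < 0 || age > 150) {
            errors.push('invalid age');
          }
          return errors;   // empty array means the submission passed
        }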


    smith

