• P (unregistered)

    It shouldn't be surprising at all when it's mentioned that the applications are on-prem apps ported directly to the cloud using VMs. I mean, what else do you expect? They're most likely putting databases in VMs too.

  • (nodebb)

    This one made me think of Death by Delete: https://thedailywtf.com/articles/Death-by-Delete

  • Pista (unregistered)

    TRWTF is yet to come: Carrol won't achieve anything, will leave in frustration, and the company will start cutting costs by firing people.

  • Naomi (unregistered)

    There's another, sneakier WTF here: if the application logs that much data, what are the odds anyone can find what they're looking for? I'm reminded of our very WTF Jenkins setup at work, which logs thousands of lines of boilerplate and a couple dozen useful lines at most - as well as naked stack traces for things like files not being found (which we only care about for cache busting - i.e. the file not being there is the desired state), so good luck grepping for errors. :/

  • Peter Wolff (unregistered) in reply to Pista

    Maybe Carrol will write some code to transparently compress the log files (which will probably shrink them to just a few dozen KiB).

    And in 2025, we'll see this code here as a legacy contribution of one of those HPCs From Hell.
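
    A minimal sketch of what such transparent compression could look like, assuming a Python application using the standard logging module (the filename and size limits are made up for illustration; the namer/rotator hooks are the pattern from the logging cookbook):

        import gzip
        import logging
        import logging.handlers
        import os
        import shutil

        def gzip_namer(name):
            # RotatingFileHandler consults this when shuffling backups,
            # so rotated files keep their .gz suffix.
            return name + ".gz"

        def gzip_rotator(source, dest):
            # Compress each rotated file; plain-text logs typically shrink by 90%+.
            with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)
            os.remove(source)

        handler = logging.handlers.RotatingFileHandler(
            "app.log", maxBytes=100 * 2**20, backupCount=10)  # hypothetical limits
        handler.namer = gzip_namer
        handler.rotator = gzip_rotator
        logging.getLogger().addHandler(handler)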

  • bvs23bkv33 (unregistered)

    that is logrotate's fault, let's reinvent it

  • Some Ed (unregistered)

    I'm reminded of former coworkers whose response to having no formal log retention policy was to retain all logs indefinitely, as well as former coworkers whose response to not knowing what details would be needed in the log was to log damn near everything.

    To be fair, there is something of a point in there, because when you submit to AWS logging, you're submitting to their relentless log rotation, and who could possibly say whether they're doing it right and whether their customers have any control over how long things are retained?

    What? I don't know who this "The Documentation" is you're referring to, but I somehow feel confident they're not more impressive than The Donald was back when he was insisting on being called that.

  • Jörgen (unregistered)

    The obvious solution is to make a script to increase the disk space automagically.
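
    For what it's worth, such a script is only a few lines of boto3 - a sketch only, with a hypothetical volume ID, mount point, and growth step:

        import shutil

        import boto3

        VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical
        MOUNT_POINT = "/var/log"
        GROW_STEP_GIB = 50

        def grow_if_nearly_full(threshold=0.9):
            usage = shutil.disk_usage(MOUNT_POINT)
            if usage.used / usage.total < threshold:
                return
            ec2 = boto3.client("ec2")
            size = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]["Size"]
            # EBS only enlarges the block device; growpart/resize2fs still has to
            # run on the instance before the new space is usable.
            ec2.modify_volume(VolumeId=VOLUME_ID, Size=size + GROW_STEP_GIB)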

  • Raj (unregistered)

    CloudWatch (the AWS log thing) has very limited retention, a few months at most. And ingesting application logs is not free; you also have to pay to search them and to export them (if you want to go beyond the retention period).

    So the options are S3, which has very limited search capability unless you register your bucket in the Glue catalog and query it with Athena - all added costs. Or you can spin up a really expensive Elasticsearch cluster.

    For a few hundred GBs I don't think there's a business case to move away from EBS storage. Maybe use S3 and mount it as a volume if the VM runs Linux.

    So TRWTF is making fun of "expensive" solutions without providing an obviously cheaper one, and TRTRWTF is doing that in an article where you also make fun of management for not understanding why money is not saved in the cloud.
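
    For concreteness, the Athena route Raj mentions might look something like this with boto3 - a sketch only, assuming the log bucket has already been registered as a table in the Glue catalog (the database, table, and result-bucket names here are all hypothetical):

        import boto3

        athena = boto3.client("athena")
        resp = athena.start_query_execution(
            QueryString=(
                "SELECT ts, level, message FROM app_logs "
                "WHERE level = 'ERROR' AND day = '2020-02-27' LIMIT 100"
            ),
            QueryExecutionContext={"Database": "logs_db"},
            ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
        )
        # Poll get_query_execution() until it finishes, then page through
        # get_query_results(); note Athena bills per byte of data scanned.
        print(resp["QueryExecutionId"])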

  • Anonymous (unregistered)

    That volume of logging is a bit questionable... I'm currently dealing with an application I got handed where every goddamn little thing is logged... a class was instantiated? Log it. Email was sent? Log that 3 times to be sure.

  • El GNUco (unregistered) in reply to Raj

    Jeff Bezos wants to combat global warming. I would start with bad programming. The Earth literally heals once we fix your mistakes.

    How much money does he make off of this?

    I bet a lot!


    Looks like they scaled up the entire instance every time they ran out of disk space? Just use S3, and everything goes down to a micro instance.

    For log analysis, I would run the logwatch utility in a container on Fargate. Download the S3 files (free inside AWS), extract, generate my reports, and store those. With EBS you would need to manage mounting among different EC2s, while S3 support is built in.

    I would get rid of instances entirely. Host large files on platforms so I get access to their community. Bandwidth is always the big cost. Start using APIs and SNS and containers. Start using EC2 like another user only, not a server.

    I wouldn't even do that. Now they ban you across everything for the most minor stuff. I wouldn't dare scrape on AWS. I would just go device to device with everything.

    More programming articles please.

  • (nodebb) in reply to Naomi

    Splunk or some other aggregator - which is what Carrol was recommending.

  • Vilx- (unregistered)

    There exists a more eloquent telling of this story: http://thecodelesscode.com/case/73

  • (nodebb) in reply to El GNUco

    And no VM needed. Seriously, though, how many years ago did that dude finally wander off (or at least into the Sidebar)?

  • Naomi (unregistered) in reply to Developer_Dude

    I wish. I'm still trying to convince senior developers that managing ~40 TFS pipelines by hand isn't sustainable, unfortunately.

  • Chris (unregistered) in reply to Raj

    A few hundred gigs of logs per instance, times dozens or hundreds of instances (or more! this is WRPT-73, so are there 72 other reporting instances?), equals how much disk space?
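
    To put a hypothetical number on that: 300 GB per instance across 73 instances is roughly 22 TB, which at the usual $0.10/GB-month for gp2 EBS comes to over $2,000 a month just to keep the logs on disk.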

  • C++apo the Buffoon (unregistered) in reply to emurphy

    Wait until your manager rubs one of the Ini family the wrong way.

    Then at the next conference call, right before everyone hangs up, say:

    Oh!

    Oh yeah!

    Please let accounting know the platform expense bill of $xxx,xxx is going to change.

    ... ... ...

    The new price ... is ... ... ... $0.0015.

    When they step to you, just act as surprised as they are.

  • Harris Mirza (unregistered) in reply to Raj

    I don't think that is the case. From the CloudWatch docs:

    "By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping the indefinite retention, or choosing a retention periods [sic] between 10 years and one day."

  • kdgregory (unregistered) in reply to Raj

    "Maybe use S3 and mount it as a volume if the VM runs Linux"

    In case anyone decides to try this: it's not a good idea. S3 uses atomic writes of the whole object, while loggers keep the file open and append data. If you use something like s3fs-fuse, you'll be trying to write the entire logfile to S3 on every update.
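
    The usual workaround is to treat S3 objects as immutable: buffer log lines in the application and ship each batch as a brand-new object. A rough sketch, with a hypothetical bucket and key layout:

        import gzip
        import logging
        import time
        import uuid

        import boto3

        class S3BatchHandler(logging.Handler):
            """Buffers records and writes each batch as a new gzipped S3 object,
            instead of appending to one ever-growing file."""

            def __init__(self, bucket, prefix, batch_size=1000):
                super().__init__()
                self.bucket, self.prefix, self.batch_size = bucket, prefix, batch_size
                self.buffer = []
                self.s3 = boto3.client("s3")

            def emit(self, record):
                self.buffer.append(self.format(record))
                if len(self.buffer) >= self.batch_size:
                    self.flush()

            def flush(self):
                if not self.buffer:
                    return
                # Each flush creates a fresh object, so S3's whole-object
                # write model is never fought against.
                key = (f"{self.prefix}/{time.strftime('%Y/%m/%d')}/"
                       f"{uuid.uuid4().hex}.log.gz")
                body = gzip.compress("\n".join(self.buffer).encode("utf-8"))
                self.s3.put_object(Bucket=self.bucket, Key=key, Body=body)
                self.buffer = []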

  • Just Me (unregistered)

    I don't have experience developing specifically for the cloud, but logrotate is great for on-premises servers. Configure it to rotate daily, compress after 7 days, and delete after 1 year. You can adjust those values to fit your needs (I personally compress after 1 day and keep 1 month of logs). Logs older than that are of limited use.
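
    A sketch of that policy as a logrotate config (the path is hypothetical; this mirrors the "compress after 1 day, keep 1 month" variant, since stock logrotate's delaycompress only delays compression by one rotation cycle rather than an arbitrary number of days):

        /var/log/myapp/*.log {
            daily            # rotate once a day
            rotate 30        # keep ~1 month of rotated logs, then delete
            compress         # gzip rotated logs...
            delaycompress    # ...but leave the newest rotation uncompressed for a day
            missingok
            notifempty
        }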
