janaagaard 40 minutes ago

A Danish audio newspaper host / podcaster reached the exact opposite conclusion when he used ChatGPT to write the manuscript for one of his episodes. He ended up spending as much time as he usually does, because he had to fact-check everything the LLM came up with. Spoiler: it made up a lot of stuff, despite the prompt making very clear that it should not do so. And the part the chatbot could help him with, writing the manuscript, was the part he found most fun. His conclusion about artificial intelligence was this:

“We thought we were getting an accountant, but we got a poet.”

Frederik Kulager: "I had ChatGPT write this episode, and tested whether my editor-in-chief would notice." https://open.spotify.com/episode/22HBze1k55lFnnsLtRlEu1?si=h...

darkxanthos 18 minutes ago

It's definitely real that a lot of smart productive people don't get good results when they use AI to write software.

It's also definitely real that a lot of other smart productive people are more productive when they use it.

These sorts of articles and comments seem to be saying, "I'm proof it can't be done," when really there's enough proof that it can be done that they're just proving they'll be left behind.

ianbicking an hour ago

There are a hundred ways to use AI for any given piece of work. For example, if you are doing interesting work and aren't using AI-assisted research tools (e.g., OpenAI Deep Research), then you are missing out on making the work that much more interesting by understanding the context and history of the subject or adjacent subjects.

This thesis only makes sense if the work is somehow interesting and you also have no desire to extend, expand, or enrich the work. That's not a plausible position.

aaronbrethorst 2 hours ago

The vast majority of any interesting project is boilerplate. There's a small kernel of interesting 'business logic'/novel algorithm/whatever buried in a sea of CRUD: user account creation, subscription management, password resets, sending emails, whatever.

  • bravesoul2 an hour ago

    Most places I've worked, the setting up of that kind of boilerplate was done a long time ago. Yes, it needs maintaining and extending, but it's rarely built from the ground up.

  • forrestthewoods 2 hours ago

    This depends entirely on the type of programming you do. If all you build is CRUD apps, then sure. Personally, I've never actually made any of those things, with or without AI.

    • PeterStuer 25 minutes ago

      You are both right. B2B, for instance, is mostly fairly templated stuff built from CRUD and some business rules. Even some niches perceived as more 'creative', such as music scoring or 3D games, are fairly rote interactions with some 'engine'.

      And I'm not even sure these 'template-adjacent' regurgitations are what the crude LLM is best at, as the output needs to pass some rigorous, inflexible test to 'pass'. Hallucinating a non-existent function in an API will be a hard fail.

      LLMs have a far easier time in domains where failures are 'soft'. This is why ELIZA passed as a therapist in the '60s, long before auto-programmers were a thing.
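      The 'hard fail' point can be seen in a minimal sketch; here `json.parse` stands in for a hypothetical hallucinated name (the real stdlib call is `json.loads`):

```python
import json

# A hallucinated API call is a hard failure: the name simply does not
# exist, so the program raises instead of degrading gracefully.
try:
    json.parse('{"a": 1}')  # hypothetical hallucinated name
except AttributeError:
    print("hard fail: json has no attribute 'parse'")

# The real API call succeeds.
data = json.loads('{"a": 1}')
print(data)  # {'a': 1}
```

      There is no partial credit here, unlike a 'soft' domain such as conversation, where a slightly-off answer can still pass.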

      Also, in 'academic' research, LLM use has reached nearly 100%, not just for embellishing write-ups to the expected 20 pages, but at each stage of the 'game', including 'ideation'.

      And if, as a CIO, you believe that your prohibition on using LLMs for coding because of 'divulging company secrets' holds, you are either strip-searching your employees on the way in and out, or wilfully blind.

      I'm not saying nobody exists who avoids AI in everything created on a computer, just as some woodworkers still handcraft exclusive bespoke furniture in a time of presses, glue, and CNC. But adoption is skyrocketing, and not just because the C-suite pressures their serfs into using the shiny new toy.

voxelghost an hour ago

I don't have LLM/AI write or generate any code or documents for me. Partly because the quality is not good enough, partly because I worry about copyright/licensing/academic rigor, and partly because I worry about losing my own edge.

But I do use LLM/AI: as a rubber duck that talks back, as a Google on steroids (albeit one whose work needs double-checking), and as a domain-discovery tool when quickly trying to get a grasp of a new area.

It's just another tool in the toolbox for me. But the toolbox is like a box of chocolates: you never know what you're going to get.

lubujackson 4 hours ago

12 months ago called, they want their thesis back.

  • ssivark 2 hours ago

    Curious to see examples of interesting non-boilerplate work that is now possible with AI. Most examples I've seen are repeats of what has been done many times before (i.e., they probably occur many times in the training data), but with a small tweak or for a different application.

    And I don't mean cutting-edge research like funsearch discovering new algorithm implementations, but more like what the typical coder can now do with off-the-shelf LLM+ offerings.

  • bravesoul2 an hour ago

    Oh, it feels like crypto again. Outlandish statements but no argument. "Few understand," as they say.

bitwize 2 hours ago

But... agentic changes everything!