Der Superblog

⌨️ Yet another "AI" blog entry / rant

I wouldn't have thought that one of my first entries would be about AI. As a programmer with over 15 years of professional experience, I am watching the development in this space with a lot of scepticism, mainly for one reason:

It's not intelligent

This is, in my opinion, the most important point, and one that many (most) people don't get. The term 'AI' is misleading, to say the least. It doesn't "think" or know physics or math the way one would assume. It feels like 99% of what we call "AI" nowadays is just generating the next token, based on probabilities. That's why I'm putting "AI" in quotation marks here.
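To make "generating the next token, based on probabilities" concrete, here is a minimal sketch. The vocabulary and the probabilities are entirely made up for illustration - real LLMs do the same basic thing at a vastly larger scale, with learned probabilities and subword tokens instead of words:

```python
import random

# A toy "language model": for each word, a probability distribution
# over possible next tokens. The words and numbers here are invented
# for illustration only.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def generate(start, steps, seed=42):
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(steps):
        probs = next_token_probs.get(tokens[-1])
        if probs is None:  # no known continuation: stop generating
            break
        words = list(probs)
        # Sample the next token at random, weighted by its probability.
        tokens.append(rng.choices(words, weights=[probs[w] for w in words])[0])
    return " ".join(tokens)

print(generate("the", 3))
```

Notice what is missing: there is no model of the world, no physics, no math - just "given the last token, which token is likely to come next".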

AI = genAI?

In this article I will mostly be ranting about generative "AI", because that seems to be the form of "AI" that is being pushed down people's throats the most nowadays. And within that, I am ranting mostly about the "generative" part - where "AI" tries to create something new.

Explain it to me like I'm a five-year-old

It's not only "AI" that should learn to say "I don't know" - people should, too. There is no harm in admitting that you don't know something or aren't an expert in it, and asking someone to explain. Most people have absolutely no idea how "AI" works. At work I participated in an internal mentoring program for six months, where we went through the basics and also got into the technicalities, deploying and tweaking our own "AI"s - and even that only barely scratched the surface. There should be much more content floating around explaining how "AI" works - to my mom, or to school kids, or to the average person marveling at the incredible feats of "technology" on their iPhone.

Am I holding it wrong?

With this heading I am referring to the iPhone 4 antenna problem: whenever I try to use "AI" in a professional setting, the problem must be in front of the screen, right? Why else would everyone be praising this technology, when in reality it is rather bad at actually being helpful? As a programmer working in the depths of the source code of a not-even-that-big project, at best the "AI" helps me as a better autocomplete. For the "AI" to really be helpful, it would need to know the full context of the project.

How many "tokens" would that cost nowadays? And how long would it take? Would it even take you in the right direction, or just give you an incredibly complex answer that takes ages to fully comprehend and work through, only for you to conclude that it's not what you wanted? It doesn't have all that context - so it makes something up. And the obvious mistakes that produce errors right in the IDE aren't even the worst problem.
A colleague told me "AI" helps him with coding - not directly in his case, but through the links it provides. For example, searching the Azure documentation by hand can be a daunting task (even with the built-in search), but apparently the "AI" finds the relevant links quite well. The actual solution it suggests alongside them, though, is often garbage.

Junior developer coding buddy

I have often read that "AI" is great for helping junior developers learn to code. Here my internal alarm bells start ringing quite loudly. People think having their "AI" prompt next to the code is like putting on the Iron Man suit: instant 10x developer. In my opinion, there is just no way around "learning with your butt" - sitting down and trying to understand what is happening, step by step, by hand. What will happen if you skip the first, let's say, 10,000 steps and let the "AI" generate the Next.js store page you've always wanted (assuming it is even able to do that)? You have no idea how the data flows through your program, what the structure is, why the structure is like that, what the alternative solutions were, etc.
Now you have your blob of code, you start clicking around in the shiny new web interface of your amazing store page, and you want to add a tiny sticker saying "built with AI", because why not. The "AI" will gladly do that - but it will also regenerate a bunch of the code around that addition, because it rewrites much more than you asked for. And now everything breaks. Or seemingly nothing breaks, but something quietly stopped working. And so on ...

Infinite code review

Everyone loves code reviews, right? Well, your amazing "AI" coding buddy will spam you with its lines of code 24/7 if you want. All you have to do is review every single line, if you are serious about using what it produces. And then you start making adjustments. It's not all bad - just a couple of changes here and there. Rename this and that file. That class should be in another scope. This API is deprecated, let me fix that as well while I'm at it. Soon you have written the whole thing yourself, in less time.

Software lifecycle

The creation part of the software lifecycle is by far the shortest. Writing some code for version 0.1 and shipping it is easy. Evolving and maintaining it for the next seven years (before someone thinks about replacing it with something else, because upkeep has become so cumbersome) is much more challenging. Especially in the web development world, things move so quickly (meaning: new frameworks are created every day) that keeping your application up to date on all its dependencies and security vulnerabilities can become a full-time job. But you have your "AI" buddy helping you out - which doesn't know the difference between Angular version 10 and 13, or which versions of the other dependencies are compatible with either of those.
One time I asked Copilot to update the dependencies of my React application to the next-higher versions - it suggested lower version numbers.
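The bitter irony is that "is this version actually higher?" is a question that doesn't need a language model at all - it is deterministic. A minimal sketch (the version numbers are just examples, and real-world version strings with pre-release tags would need more care):

```python
def parse_semver(version):
    """Split a version string like '18.2.0' into a comparable tuple (18, 2, 0)."""
    return tuple(int(part) for part in version.split("."))

def is_upgrade(current, suggested):
    """True only if the suggested version is strictly higher than the current one."""
    return parse_semver(suggested) > parse_semver(current)

# The scenario from the text: the assistant "upgrades" a dependency downwards.
print(is_upgrade("18.2.0", "17.0.2"))  # False -> reject the suggestion
print(is_upgrade("18.2.0", "18.3.1"))  # True
```

Tuple comparison also handles the classic trap that string comparison gets wrong: "1.10.0" really is higher than "1.9.9".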

Creating code is not the problem

I like this article, which rants in part about the same issue:

AI is not coming to solve all our problems and write all our code for us—and even if it was, it wouldn't matter. Writing code is but a sliver of what professional software engineers do, and arguably the easiest part. Only we have the context and the credibility to drive the changes we know form the bedrock for great teams and engineering excellence.

Having the "AI" take the whole project context into account would mean documenting everything and feeding it back to the "AI". I'm not sure how that would even work technically, or how much it would cost.
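Just to get a feel for the order of magnitude: a rough back-of-the-envelope estimate of how many tokens a whole project would consume. The four-characters-per-token ratio is a common rule of thumb for English text and code, not an exact figure, and the file extensions are arbitrary examples:

```python
import os

CHARS_PER_TOKEN = 4  # rough rule of thumb, not exact

def estimate_tokens(root, extensions=(".ts", ".tsx", ".css", ".md")):
    """Walk a project tree and roughly estimate how many tokens
    feeding every matching file to a model would consume."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total_chars += len(f.read())
    return total_chars // CHARS_PER_TOKEN
```

Run this on even a medium-sized codebase and the number quickly lands in the hundreds of thousands of tokens - before you add any documentation, tickets, or conversation history on top.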

Is it good for anything?

Yes. It can be useful for some things - as mentioned above, digging the relevant links out of sprawling documentation, for example, or acting as a better autocomplete.

The space is developing

I am hearing a lot of "it will become better in the future" - which is what we have been hearing in the technology space for decades. Demos from the big players can look impressive, but they are obviously meticulously crafted and cherry-picked singular examples. Yes, there is Anthropic's Claude Code with Sonnet 3.7 (which will be anciently outdated by the time you read this), which I haven't tested yet. Let's just say I'm not holding my breath.

What next

I'm sincerely hoping the hype will die down soon and we will be left with the few genuinely useful use cases of "AI". 75% of companies are not yet seeing a return on their investment, which makes me hopeful that this might actually happen.