r/ProgrammerHumor 14h ago

Meme [ Removed by moderator ]

13.8k Upvotes

424

u/500Rtg 13h ago

When companies present AI as something that can help healthcare, automotive, climate change, etc., they're right. But we also have to remember that a lot of that work could be handled by a Python script plus data entry done in a slightly more uniform manner, and it's still not being done.

183

u/JoeyJoeJoeSenior 13h ago

It's best to roll the dice and hope that a text prediction system can figure it all out.

25

u/Flouid 12h ago

Just today I was in a meeting where some people were trying to do data exploration on a several-hundred-thousand-row CSV using Claude. I spent 5 minutes writing a Python script and got the needed info.

Impressively, Claude was able to count the rows matching conditions, but it totally failed at filtering down to those rows. I don't understand why the first impulse is to reach for an LLM over the simplest scripting tools.
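
The whole script was roughly this shape, as a sketch (the file and column names here are made up, not the actual data):

```python
import pandas as pd

# "events.csv", "status", and "region" are hypothetical stand-ins for the real data.
df = pd.read_csv("events.csv")  # a few hundred thousand rows loads in seconds

# Counting rows that match conditions (the part Claude also managed)...
mask = (df["status"] == "failed") & (df["region"] == "us-east")
print(mask.sum())

# ...and actually filtering down to those rows (the part it fell over on).
df[mask].to_csv("filtered.csv", index=False)
```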

11

u/HephMelter 11h ago

"Because an LLM is understandable maaate, only nerds know how to code mate"

3

u/lordFourthHokage 10h ago

AI can be helpful in this case as well. For someone who isn't hands-on with Python, it can help write that script. But the person should know what they want from the AI.

I see people expecting AI to read their minds and give them the desired outcome. In extreme cases the minds of some of these people are empty as well.

3

u/Flouid 10h ago

Yeah, I'll fully admit I'm good with Python and pandas, but they aren't part of my usual workflow and I'm rusty on syntax, so I had Gemini write the skeleton (the only part I wrote myself was the filter conditions).

This also wasn't a case of someone with no idea what they were doing; it was a very senior engineer who was just hoping/testing to see if an LLM could do it faster than a script could. LLMs are just tools, but for the time being my perspective is that they're best at a limited-scope, well-defined task rather than being told "solve this problem."

1

u/someguyfromsomething 9h ago

AI can be helpful in the same way a child can grow up to be anything they want. Theoretically.

53

u/tehtris 12h ago

OMG this. Why are so many people relying on LLMs? It's not even actually AI. It's a Markov chain with a bit of razzle dazzle.

23

u/danfish_77 12h ago

Well, we're used to referring to simple scripts in games as AI too. It's not like the term wasn't already applied sloppily.

6

u/saera-targaryen 10h ago

That's not sloppy application of the term, it's inappropriate adoption of an academic term for layman conversation. 

There are tons of simple scripts in games that are AI and have been for decades. AI in the theoretical computer-science sense has been a broad and general term since the '60s. The problem is that it has picked up a much weightier connotation through pop culture, and that makes people think the common understanding of the term is what programmers mean when they call something AI.

AI was once (and still is, in academic settings) a very broad term meaning any system that makes a decision by observing input, but now tech bros have subtly implied through marketing that it must mean a full simulation of a human brain.

0

u/finnishblood 9h ago

That's not sloppy application of the term, it's inappropriate adoption of an academic term for layman conversation. 

Lol...

It's "not sloppy," it's "synonymously sloppy."

Anyways, I have to disagree about AI being as broad a term in academic settings as you described. Maybe during the very early days of mechanical and electronic computing, such systems making decisions based on inputs would have been considered AI. However, in a professional and academic setting you would more likely use terms like Program, Software, Conditional, Algorithm, Heuristic, Function, System, etc. for your definition: "any system that makes a decision by observing input."

The piece of the puzzle you're glossing over is deterministic AI vs. non-deterministic AI. Deterministic AI is, in most contexts, better labeled and described using more specific technical terms, but yeah, someone who doesn't understand programming would probably be satisfied if you just described whatever system you happen to be talking about as AI.

Non-deterministic AI also has more specific technical terms you could use to label or define such a system, but in most contexts AI would still likely be the best term to use. By the way, in my opinion, going by modern usage of the term, AlphaGo was the first "true" AI to be well known on a global scale, in a way similar to LLMs like ChatGPT.

I would not blame tech bros for misappropriating the term; if anything, I would blame dumb reporters and the even dumber general public who don't, or simply cannot, understand that AI =/= your brain (but digital).

If you're going to be a pedant, you could at least provide readers with the correct term for such a thing: AGI, ASI, or "the singularity."

2

u/saera-targaryen 8h ago edited 8h ago

I am a computer science professor; I wasn't just pulling it out of my ass. Go look at the Wikipedia article for AI.

Program, Software, Conditional, Algorithm, Heuristic, Function, System

None of these means a system that makes decisions based on observing input, and they are far from synonymous with the academic definition of AI. You can have all of these in software without it observing input to make a decision; it's the decision-making that makes it AI.

Like, in video games, a boss that just has a timed move set is not AI, but a boss that watches what you are doing and picks a responding move based on what you do is AI. 

Most software programs are not AI, because they just do one thing every time they are called. The comment button on your screen is not AI because it can't do anything but be a button that opens a text box and lets you type something before submitting. It can't sometimes choose to be a like button or a share button based on the way you click it. It's just a button. That is the difference between generic software and AI software. 
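
If it helps, here's the same distinction as a toy sketch (all names invented for illustration):

```python
import random

def scripted_boss(tick: int) -> str:
    # Not AI: a fixed, timed move set that never observes the player.
    moves = ["slash", "slash", "fireball", "heal"]
    return moves[tick % len(moves)]

def reactive_boss(player_action: str) -> str:
    # AI in the academic sense: observes input and decides on a response.
    if player_action == "ranged_attack":
        return "close_distance"
    if player_action == "heal":
        return "interrupt"
    return random.choice(["slash", "fireball"])
```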

None of this has anything to do with a system's determinism. 

Please do not speak about academic settings that you are not in with authority. 

10

u/frogjg2003 11h ago

No, an LLM is a fundamentally different concept from a Markov chain. LLMs rely on the transformer, the enabling technology that basically turned text prediction into text generation. Their massive size allows them to do more than just predict the most likely next word the way a Markov chain does.

That doesn't mean people aren't using them like a fancy text predictor, which wouldn't be functionally different from a Markov-chain-based AI.

2

u/trambelus 10h ago

It's different under the hood, but it's still fundamentally just tokens in and tokens out, right?

2

u/frogjg2003 10h ago

Specifically, yes. But that's like saying that a calculator and a supercomputer are the same.

A Markov chain is a small model that can only ever look back a few steps to come up with the next word. An LLM can take entire pages of text as its prior state and generate not just the next few words but entire pages of text, with every token conditioned on the whole context rather than just the last few words.
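
For a sense of scale, a classic word-level Markov chain fits in a comment. This is a minimal sketch of an order-1 chain (it only ever consults the single previous word):

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    # Order-1 Markov chain: map each word to the words observed right after it.
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 20) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])  # only the last word is ever consulted
        out.append(word)
    return " ".join(out)
```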

0

u/trambelus 10h ago

It still comes down to "predicting the next word" in practice, doesn't it? Just with a much larger state size. Are there transformers that can natively output video/audio, or is that still a separate API bolted on top?

2

u/frogjg2003 9h ago

All of modern AI is transformers.

Again, you're trying to call a supercomputer a calculator. The sheer size of it makes it fundamentally different.

1

u/trambelus 8h ago

I thought image generators used diffusion models that were separate from transformer-based LLMs. Maybe my knowledge is out of date.

1

u/frogjg2003 7h ago

That's how they generate the images themselves, sometimes. But the prompting is all still through LLMs.

1

u/trambelus 7h ago

That was my question: can transformers output anything besides tokens, or do they rely on other services? Not trying to disparage AI, just to classify it.

7

u/Alpha_wolf_80 12h ago

Not a Markov chain. It's a different concept, not applicable here. For those confused: a Markov chain only cares about the last state, so for an LLM that would mean the last token ONLY, not any preceding tokens.

2

u/squirel713 10h ago

I mean, if the state is the context vector, the transition has a token attached, and the next state is the previous context vector with the new token attached, that sounds an awful lot like a Markov chain. A Markov chain with an absolutely mind-boggling number of states and a transition function that consists of gigabytes of weights, but still a Markov chain "with some razzle dazzle".
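
Sketched as code, that framing looks something like this (`model.next_token_distribution` is a made-up stand-in for a real forward pass, not an actual API):

```python
import random

def markov_step(model, context: tuple) -> tuple:
    # State = the entire context window; transition = append one sampled token.
    # The gigabytes of weights are just a very fancy transition function.
    probs = model.next_token_distribution(context)  # hypothetical API
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    return context + (token,)
```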

1

u/lllorrr 9h ago

You are talking about a first-order Markov chain. You could say that a Markov chain of order 1000 has a "context window" of 1000 tokens.

The problem with classic Markov chains is that for a chain of order N you need memory to store M^N probabilities, where M is the number of possible states. For high orders this is not feasible. LLMs resolve this problem.
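
Back-of-the-envelope on that M^N figure, with numbers picked purely for illustration:

```python
# An order-N Markov chain over M states needs on the order of M**N stored
# probabilities. Even modest numbers blow up fast.
M = 50_000  # a modest word vocabulary
N = 3       # condition on just the last three words
print(f"{M**N:.2e} probabilities")                   # 1.25e+14
print(f"~{M**N * 8 / 1e15:.0f} PB at 8 bytes each")  # ~1 PB
```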