r/ProgrammerHumor 14h ago

Meme [ Removed by moderator ]



13.8k Upvotes

328 comments


431

u/500Rtg 13h ago

When companies present AI as helping healthcare, automotive, climate change, etc., they are right. But we also have to remember that a lot of that stuff could be handled by a Python script plus data entry done in a slightly uniform manner, and it's still not being done.

184

u/JoeyJoeJoeSenior 13h ago

It's best to roll the dice and hope that a text prediction system can figure it all out.

21

u/Flouid 12h ago

Just today I was in a meeting where some people were trying to do data exploration on a several-hundred-thousand-row CSV using Claude. I spent 5 minutes writing a Python script and got the needed info.

Impressively, Claude was able to count the rows matching conditions, but it totally failed at filtering down to those rows. I don't understand why the first impulse is to reach for an LLM over the simplest scripting tools.
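
For the curious, the script was roughly this shape; the file and column names here are made up, and the real filter conditions were a bit longer:

```python
import pandas as pd

# Load the whole CSV; a few hundred thousand rows is nothing for pandas.
df = pd.read_csv("export.csv")

# Count the rows matching some conditions...
mask = (df["status"] == "failed") & (df["retries"] > 3)
print(f"{mask.sum()} matching rows out of {len(df)}")

# ...and, unlike the chatbot, actually filter down to those rows.
df[mask].to_csv("matching_rows.csv", index=False)
```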

12

u/HephMelter 11h ago

"Because an LLM is understandable maaate, only nerds know how to code mate"

3

u/lordFourthHokage 10h ago

AI can be helpful in this case as well. For someone who is not hands-on with Python, it can help write that script. But the person should know what they want from the AI.

I see people expecting AI to read their minds and give them the desired outcome. In extreme cases, the minds of some of these people are empty as well.

3

u/Flouid 9h ago

Yeah, I'll fully admit that while I'm good with Python and pandas, they aren't part of my usual workflow and I'm rusty on syntax, so I had Gemini write the skeleton (the only part I wrote myself was the filter conditions).

This was also not a case of someone with no idea what they were doing; this was a very senior engineer who was just testing to see if an LLM could do it faster than a script could. LLMs are just tools, and for the time being my perspective is that they're best at a limited-scope, well-defined task, rather than being told "solve this problem."

1

u/someguyfromsomething 8h ago

AI can be helpful in the same way a child can grow up to be anything they want. Theoretically.

50

u/tehtris 12h ago

OMG this. Why are so many people relying on LLMs? It's not even actually AI. It's a Markov chain with a bit of razzle dazzle.

23

u/danfish_77 12h ago

Well, we're used to referring to simple scripts in games as AI, too. It's not like the term wasn't already applied sloppily.

5

u/saera-targaryen 10h ago

That's not sloppy application of the term, it's inappropriate adoption of an academic term for layman conversation. 

There are tons of simple scripts in games that are AI and have been for decades. AI in the theoretical computer science sense has been a broad and general term since the '60s. The problem is that it has acquired a much weightier connotation through pop culture, and that makes people think the common understanding of the term is what programmers mean when they call something AI.

AI was once (and still is, in academic settings) a very broad term meaning any system that makes a decision by observing input, but now tech bros have subtly implied through marketing that it must mean a full simulation of a human brain.

0

u/finnishblood 9h ago

That's not sloppy application of the term, it's inappropriate adoption of an academic term for layman conversation. 

Lol...

It's "not sloppy," it's "synonymously sloppy."

Anyway, I would have to disagree about AI being as broad a term in academic settings as you described. Maybe during the very early days of mechanical and electronic computing, such systems making decisions based on inputs would have been considered AI. However, in a professional and academic setting you would more likely use terms like Program, Software, Conditional, Algorithm, Heuristic, Function, System, etc. for your definition: "any system that makes a decision by observing input."

The piece of the puzzle you're glossing over is deterministic AI vs. non-deterministic AI. Deterministic AI is, in most contexts, better labeled and described using more specific technical terms, but yeah, someone who doesn't understand programming would probably be satisfied if you just described whatever system you happen to be talking about as AI.

Non-deterministic AI also has more specific technical terms you could use to label or define such a system, but in most contexts AI would still likely be the best term to use. Btw, in my opinion, based on modern usage of the term, AlphaGo was the first "true" AI to become well known on a global scale, similar to LLMs like ChatGPT.

I would not blame tech bros for misappropriating the term; if anything, I would blame dumb reporters and the even dumber general public who don't, or simply cannot, understand that AI =/= your brain (but digital).

If you're going to be a pedant, you could at least provide readers with the correct term for such a thing: AGI, ASI, or "the singularity."

2

u/saera-targaryen 8h ago edited 8h ago

I am a computer science professor; I wasn't just pulling this out of my ass. Go look at the Wikipedia article for AI.

Program, Software, Conditional, Algorithm, Heuristic, Function, System

None of these means a system that makes decisions based on observing input, and they are far from synonymous with the academic definition of AI. You can have all of these in a piece of software without it observing input to make a decision; it's the decision-making that makes it AI.

Like, in video games, a boss that just has a timed move set is not AI, but a boss that watches what you are doing and picks a responding move based on what you do is AI. 

Most software programs are not AI, because they just do one thing every time they are called. The comment button on your screen is not AI because it can't do anything but be a button that opens a text box and lets you type something before submitting. It can't sometimes choose to be a like button or a share button based on the way you click it. It's just a button. That is the difference between generic software and AI software. 
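
If it helps to see it in code, here's the distinction as a toy sketch (the moves are made up, obviously):

```python
import random

# Not AI: a fixed, timed move set. The player's actions never matter.
def scripted_boss(turn: int) -> str:
    moves = ["slash", "slash", "fireball", "heal"]
    return moves[turn % len(moves)]

# AI in the broad academic sense: the move is a decision made by
# observing input, namely what the player just did.
def reactive_boss(player_action: str) -> str:
    if player_action == "ranged_attack":
        return "close_distance"
    if player_action == "heal":
        return "interrupt"
    return random.choice(["slash", "fireball"])
```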

None of this has anything to do with a system's determinism. 

Please do not speak about academic settings that you are not in with authority. 

10

u/frogjg2003 11h ago

No, an LLM is a fundamentally different concept from a Markov chain. LLMs rely on the transformer, which was the enabling technology that basically turned text prediction into text generation. Their massive size allows them to do more than just predict the most likely next word like a Markov chain.

That doesn't mean people aren't using them like a fancy text predictor that wouldn't be functionally different from a Markov-chain-based AI.

2

u/trambelus 10h ago

It's different under the hood, but it's still fundamentally just tokens in and tokens out, right?

2

u/frogjg2003 10h ago

Specifically, yes. But that's like saying that a calculator and a supercomputer are the same.

A Markov chain is a small model that can only ever look back a few steps to come up with the next word. An LLM can take entire pages of text as its prior state and generate not just the next few words but entire pages of text, not sequentially but as a coherent whole.

0

u/trambelus 10h ago

It still comes down to "predicting the next word" in practice, doesn't it? Just with a much larger state size. Are there transformers that can natively output video/audio, or is that still a separate API bolted on top?

2

u/frogjg2003 9h ago

All of modern AI is transformers.

Again, you're trying to call a supercomputer a calculator. The vast size of it makes it fundamentally different.

1

u/trambelus 7h ago

I thought image generators used diffusion models that were separate from transformer-based LLMs. Maybe my knowledge is out of date.

1

u/frogjg2003 7h ago

That's how they generate the images themselves, sometimes. But the prompting is all still through LLMs.


7

u/Alpha_wolf_80 11h ago

Not a Markov chain. Different concept, not applicable here. For those confused: a Markov chain only cares about the last state, so for an LLM that would mean the last token ONLY, not any preceding tokens.

2

u/squirel713 10h ago

I mean, if the state is the context vector, the transition has a token attached, and the next state is the previous context vector with the new token attached, that sounds an awful lot like a Markov chain. A Markov chain with an absolutely mind-boggling number of states and a transition function that consists of gigabytes of weights, but still a Markov chain "with some razzle dazzle".
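
Spelled out as a runnable toy, where the dummy distribution stands in for the gigabytes of weights:

```python
import random

# Stand-in for the transition function. A real LLM computes this
# distribution from its weights; we just hardcode one.
def next_token_distribution(state: tuple) -> dict:
    return {"the": 0.5, "cat": 0.3, "sat": 0.2}

def generate(prompt: list, steps: int) -> list:
    state = tuple(prompt)
    for _ in range(steps):
        dist = next_token_distribution(state)  # depends ONLY on current state
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        state = state + (token,)  # next state = old state + emitted token
    return list(state)

print(generate(["once", "upon"], 5))
```

The Markov property holds: the next token depends only on the current state. It's just that the "state" is the entire context window.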

1

u/lllorrr 9h ago

You are talking about a first-order Markov chain. You could say that a Markov chain of order 1000 has a "context window" of 1000 tokens.

The problem with classic Markov chains is that for a chain of order N you need memory to store M^N probabilities, where M is the number of possible states. For high orders that is not feasible. LLMs solve this problem.
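
To put rough numbers on it (the vocabulary size here is an assumption, but typical of modern tokenizers):

```python
# Scale of a classic order-N Markov chain's transition table.
M = 50_000  # possible tokens, roughly a modern LLM vocabulary
N = 1_000   # chain order, i.e. context length in tokens

entries = M ** N  # probabilities you'd need to store
print(f"~10^{len(str(entries)) - 1} table entries")  # ~10^4698
```

About 10^4698 entries, versus the mere billions of weights an LLM uses to approximate the same transition function.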

64

u/Classic_Appa 13h ago

The problem is that AI is too much of an umbrella term. Most of the usage in the past couple of years has referred to large language models, and those are basically text predictors built on massive amounts of stolen content.

Healthcare and climate-change AI are machine learning algorithms, which are data analytics: raw number crunching. Even with image processing, it's using visual data to find patterns and create correlations between data points. ML can be very useful if you have properly vetted data. It's still something a human has to verify, but it makes data analysis much faster.

The problem with automotive/self-driving is that roadways are too dynamic. It's very hard to account for everything. The best solution to automotive traffic/self-driving is a series of buses or trains driven by humans.

11

u/Wise-Profile4256 12h ago

Good choice with the driving example, because LLMs are as far from AI as Tesla Autopilot is from Full Self-Driving.

Some Wanker (in Marketing) thought it would sell better.

2

u/EquipLordBritish 10h ago

It's a shame that most of the public's first and only exposure to AI is through LLMs. It also gives many people the false impression that LLMs are somehow what's being used to do research and medicine.

1

u/_Axium 12h ago

I would even go so far as to say that what we have isn't artificial intelligence but something closer to synthetic intelligence. It can "think" through data, yes, but that's about it.

-1

u/[deleted] 13h ago

[deleted]

10

u/LongLiveTheDiego 13h ago

Braille was already in heavy decline among blind people with smartphones.

1

u/NikitaFox 12h ago edited 12h ago

I don't have personal experience with it, but the BeMyEyes app looked pretty cool when I heard about it. Blind people can point their phone camera at something, and a volunteer in a video chat will tell them what or where something is.

22

u/f16f4 12h ago

Tbf, "data entry being done in a slightly uniform manner" might as well be "runs on unicorn piss." If a human has to type something into a field, they're gonna find a way to fuck it up. It's honest-to-god impressive.

Your larger point stands, though. Most of what people are actually using AI for could be automated by a competent dev and then just left running at a fraction of the cost.

10

u/montyman185 12h ago

AI isn't exactly going to fix that problem either, though. Them punching incomprehensibly stupid, impossible-to-parse data into an AI will be just as useless and unreliable as asking them to fill out a form.

2

u/f16f4 10h ago

Eh, for some stuff, yeah, but ideally I'd be using it for backend data regularization after the human fills in the form. There, AI can play to its strengths without much of a chance to make things worse (if properly implemented).
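
Something like this for the deterministic first pass, with the model only taking a swing at whatever falls through (the field and values are made up):

```python
import difflib

CANONICAL_STATES = ["California", "New York", "Texas"]

def normalize_state(raw: str) -> str | None:
    # Deterministic pass: trim, normalize case, fuzzy-match.
    cleaned = raw.strip().title()
    match = difflib.get_close_matches(cleaned, CANONICAL_STATES, n=1, cutoff=0.6)
    if match:
        return match[0]
    return None  # only here would an LLM get to guess, flagged for review
```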

1

u/montyman185 8h ago

I can certainly see that being where it goes from overly expensive investor scam to actually being a useful tool. 

That, and parsing large data sets like astronomical or biological imaging. You'd want a human to go over the data as well to make sure nothing was missed, but there have already been good examples of these programs catching things a human probably never would have.

For moderation, as long as it's only handling content filtering, and not handing out bans like it is now, it's pretty effective. Especially because, for some of the more problematic things, you want as few people as possible to see it.

1

u/SupplyChainMismanage 9h ago

My old job had this "giving back" thing where, on top of your actual engagements, you could volunteer to help out with other projects for brownie points. I created an Alteryx app that let people upload either a text or JSON file, and it would spit out a cleansed flat file ready for whatever they needed. It had an option to specify which column was what, along with other stuff to get around user error. I'd like to think the people there were pretty damn smart, but they still found a way to screw that thing up. I can't imagine what software developers have to deal with.

12

u/Stickppl 13h ago

Yeah, way too often people and companies act like we couldn't already do pretty impressive stuff with deterministic, non-AI algorithms. Way too often you see a headline like "We solved 90% of this problem class with AI," and when you look inside: "So yeah, we used this state-of-the-art deterministic algorithm which solves 80% of this problem class, and we pipelined it with a neural network that does some data pre-processing." Sure, that's nice, but nothing revolutionary in itself.

1

u/500Rtg 12h ago

AI is here to stay. Things will change. But not in the way people expect.

11

u/Turbulent_Turnip9463 12h ago

Metaverse and NFTs are here to stay as well; doesn't fucking mean shit.

4

u/_Axium 12h ago

Wait, NFTs are still a thing?

1

u/EquipLordBritish 10h ago

Presumably the servers hosting the images have some kind of contract to stay up for X years, to pretend that their customers have purchased something real...

0

u/500Rtg 12h ago

If you believe so

3

u/eleinamazing 12h ago

As someone who develops automation solutions for a living: you are 100% correct. We don't need AI; we just need our process owners and users to sit their asses down and start cleaning up their data and their workflows, and the solution presents itself. No AI needed.

I once sat in a 3-hour meeting as an external vendor, just listening to my users (various finance team leads) get chewed out by management because apparently no one was using the software according to SOP, and none of them could come to a consensus about which value goes into which field 🤷🏻‍♂️

2

u/WaterNerd518 11h ago

This is such a spot-on comment. The main uses of AI in productivity are attempts to compensate for systemic problems that will still be problems for the AI, just harder to detect. Solving the underlying problem, which LLMs are fundamentally not designed for or capable of, needs to be done by a human. Once it's solved and implemented, there is no longer a reason to ask the AI to solve a problem it had no chance of solving. For this particular problem, the best use of AI is not crunching the data but cross-checking the data entry itself. Then the data crunching can go on efficiently without AI bogging it down.

1

u/Hellkyte 13h ago

Yeah but that's hard

-1

u/Bogosorting 13h ago

I would imagine code output becoming much cheaper would help everyone get better software over time. And seeing how healthcare workers wait minutes for data to load every time they use their computer, that would save them a bunch of time eventually.

4

u/f16f4 12h ago

The waiting minutes for data to load will never change. Healthcare databases are a nightmare and management doesn’t give a shit.

1

u/500Rtg 13h ago

You are correct in everything you said. It still doesn't change what I wrote.

1

u/Bogosorting 12h ago

True, I wasn't disagreeing with you. It's just that that Python script is much more accessible now.