- cross-posted to:
- technology@lemmy.ml
I’m a software developer and I know that AI is just the shiny new toy, and everyone uses the buzzword to generate investment revenue.
99% of the crap people use it for is worthless. It’s just a hammer and everything is a nail.
It’s just like “the cloud” was 10 years ago. Now everyone is back-pedaling from that because it didn’t turn out to be the panacea that was promised.
Misleading title. From the article,
Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.
In no way does this imply that the “industry is pouring billions into a dead end”. AGI isn’t even needed for industry applications, just implementing current-level agentic systems will be more than enough to have massive industrial impact.
LLMs are good for learning, brainstorming, and mundane writing tasks.
Yes, and maybe finding information right in front of them, and nothing more
Analyzing text from a different point of view than your own. I call that “synthetic second opinion”
I went to CES this year and I sat on a few AI panels. This is actually not far off. Some said yeah, this is right, but multiple panels I went to said that this is a dead end, and while useful, they are starting down different paths.
It’s not bad, it’s just that we’re finding it’s not great.
I like my project manager, they find me work, ask how I’m doing and talk straight.
It’s when the CEO/CTO/CFO speaks where my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself as it frantically tosses words and phrases into the meaning grinder and cranks the wheel, only for nothing to come out of it time and time again.
COs are corporate politicians, media trained to only say things which are completely unrevealing and lacking of any substance.
This is by design so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.
I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you’re having on a personal project or what toy to buy for your cat’s birthday.
I think my CEO is doing something wrong then because he seems to be trying to maximize IC whiplash sometimes.
Better to just shut it out as you described and use the time to think about that issue you’re having on a personal project or what toy to buy for your cat’s birthday.
Exactly. Do the daily corpo dance and cheer if they babbling about innovation, progress, growth and new products. Do not fight against it. Just take your money and put your valuable time and energy elsewhere.
Right, that sweet spot between too few stimuli, where your brain just wants to sleep or run away, and enough stimuli that you can’t just zone out (or sleep).
The number of times my CTO says we’re going to do THING, only to have to be told that this isn’t how things work…
Find a better C-suite
I just turn off my camera and turn on Forza Motorsport or something like that
Optimizing AI performance by “scaling” is lazy and wasteful.
Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.
Thing is, same as with GHz, you have to do it as much as you can until the gains get too small. You do that, then you move on to the next optimization, like AI has done and is now doing with test-time compute, token quality, and other areas.
To be fair, GHz did go up. Granted, it’s not why modern processors are faster and more efficient.
TIL
don’t worry about performance, GHz will always go up
TF2 devs lol
It always wins in the end though. Look up the bitter lesson.
I miss flash players.
They’re throwing billions upon billions into a technology with extremely limited use cases that is, at best, a novelty. My god, even drones fared better in the long run.
I mean it’s pretty clear they’re desperate to cut human workers out of the picture so they don’t have to pay employees that need things like emotional support, food, and sleep.
They want a workslave that never demands better conditions, that’s it. That’s the play. Period.
If this is their way of making AI, brute-forcing the technology without innovation, the infrastructure will probably cost these companies more to maintain than just hiring people. These AI companies are already not making a lot of money for how much they cost to maintain. And unless they charge companies millions of dollars just to be able to use their services, they will never make a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kinda pointless if they aren’t willing to prioritize efficiency.
It’s basically the same argument they have with people. They don’t wanna treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Whereas now they don’t want to spend money to make AI more efficient, yet increasing efficiency would make it less expensive to run. It’s the never-ending cycle of cutting corners only to eventually make less money than you would have if you did things the right way.
Absolutely. It’s maddening that I’ve had to go from “maybe we should make society better somewhat” in my twenties to “if we’re gonna do capitalism, can we do it how it actually works instead of doing it stupid?” in my forties.
The oligarchs running these companies have suffered a psychotic break. What the cause is exactly I don’t know, but the game they’re playing is a lot less about profits now. They care about control and power over people.
I theorize it has to do with desperation over what they see as an inevitable collapse of the United States, and they are hedging their bets on holding onto the reins of power for as long as possible until they can fuck off to their respective bunkers while the rest of humanity eats itself.
Then, when things settle, they can peek their heads out of their hidey-holes and start their new Utopian civilization or whatever.
Whatever’s going on, profits are not the focus right now. They are grasping at ways to control the masses…and failing pretty miserably I might add…though something tells me that scarcely matters to them.
inevitable collapse of the United States
Which they are intentionally trying to cause, rather than deal with their addiction to wealth and power.
And the tragedy of the whole situation is that they can’t win, because if every worker is replaced by an algorithm or a robot, then who’s going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories until you can’t expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire and we start from scratch.
producing war machines that fight each other for territory to build more war machine factories until you can’t expand anymore for one reason or another.
As seen in the retro-documentary Z!
Why would you need anyone to buy your products when you can just enjoy them yourself?
Because there’s always a bigger fish out there to get you. Or that’s what trillionaires will tell themselves when they wage a robotic war. This system isn’t made to last the way it’s progressing right now.
Nah, generative ai is pretty remarkably useful for software development. I’ve written dozens of product updates with tools like claudecode and cursorai, dismissing it as a novelty is reductive and straight up incorrect
I weep for your customers
They’re all pretty fired up at the update velocity tbh 🤷
Yeah, nothing pleases us more than constant, buggy updates.
Don’t be an ass and realize that ai is a great tool for a lot of people. Why is that so hard to comprehend?
It’s not hard to comprehend. It’s that we literally have jackasses like Sam Altman arguing that if they can’t commit copyright violations at an industrial scale and pace that their business model falls apart. Yet, we’re still nailing regular people for piracy on an individual scale. As always individuals pay the price and are treated like criminals, but as long as you commit crime big enough and fast enough on an industrial scale, we shake our heads, go “wow” and treat you like a fucking hero.
If the benefits of this technology were evenly distributed the argument might have a leg to stand on, but it is never evenly distributed. It is always used as a way to pay professionals less for work that is “just okay.”
When a business buys the tools to use generative AI and they shitcan employees to afford it, they have effectively used those employees’ labor against them to replace them with something lesser. Their labor was exploited to replace them. The people who actually deserve the bonus of generative AI are losing their jobs or being expected to be ten times more productive, instead of being allowed to cool their heels because they worked hard enough to have this doohickey work for them. No, it’s always “line must go up, rich must get richer, fuck the laborers.”
I’ll stop being an ass about it when people stop burning employees out who already work hard or straight up fire them and replace them with this bullshit when their labor is what allowed the business to afford this bullshit to begin with. No manager or CEO can do all this labor on their own, but they get the fruits of all the labor their employees do as though they did do it all on their own, and it is fucked up.
I don’t have a problem with technology that makes our lives easier. I don’t have a problem with copyright violations (copyright as it exists is broken. It still needs to exist, just not in its current form).
What I have a problem with is businesses using this as an excuse to work their employees like slaves or replacing the employees that allowed them to afford these tools with these tools.
When everyone who worked hard to afford this stuff gets a paid vacation for helping to afford the tools and then comes back to an easier workload because the tools help that much, I’ll stop being a fucking ass about it.
Like I said elsewhere, the bottom line is business owners want a slave that doesn’t need things like sleep, food, emotional support, and never pushes back against being abused. I’m tired of people pretending like it’s not what businesses want. I’m tired of people pretending this does anything except make already overworked employees bust even more ass.
Your comment is on capitalism, not scaling ai or ai being used with effect.
What’s hard for you to comprehend about my comment?
You are insulting a person, because they said ai helps them.
Unit tests and good architecture are still foundational requirements, so far no bug reports with any of these updates. In fact a huge chunk of these ai updates were addressing bugs. Not sure why you’re so mad at what you imagine is happening and making so many broad assumptions!
😂
As an experienced software dev I’m convinced my software quality has improved by using AI. More time for thinking and less time for execution means I can make more iterations of the design and don’t have to skip as many nice-to-haves or unit tests on account of limited time. It’s not like I don’t go through every code line multiple times anyway, I don’t just blindly accept code. As a bonus I can ask the AI to review the code and produce documentation. By the time I’m done there’s little left of what was originally generated.
As an experienced software dev I’m convinced my software quality has improved by using AI.
Then your software quality was extreme shit before. It’s still shit, but an improvement. So, yay “AI”, I guess?
That seems like just wishful thinking on your part, or maybe you haven’t learned how to use these tools properly.
Na, the tools suck. I’m not using a rubber hammer to get woodscrews into concrete and I’m not using “AI” for something that requires a brain. I’ve looked at “AI” suggestions for coding and it was >95% garbage. If “AI” makes someone a better coder it tells more about that someone than “AI”.
Then try writing the code yourself and ask ChatGPT’s o3-mini-high to critique your code (be sure to explain the context).
Or ask it to produce unit tests - even if they’re not perfect from the get go I promise you will save time by having a starting skeleton.
Another thing I often use it for is ad hoc transformations. For example I wanted to generate constants for all the SQLSTATE codes in the PostgreSQL documentation. I just pasted the table directly from the documentation and got symbolic constants with the appropriate values and with documentation comments.
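To give a concrete picture, the output of that kind of ad hoc transformation looks roughly like this (a hypothetical sketch: the codes themselves are straight from PostgreSQL’s documented SQLSTATE table, but the naming scheme is just one way to do it):

```python
# Sketch of generated output: symbolic constants for PostgreSQL SQLSTATE
# codes, with documentation comments. A few representative entries only.

# Class 23: Integrity Constraint Violation
FOREIGN_KEY_VIOLATION = "23503"  # insert or update violates a foreign key
UNIQUE_VIOLATION = "23505"       # duplicate key value violates a unique constraint

# Class 40: Transaction Rollback
SERIALIZATION_FAILURE = "40001"  # could not serialize access due to concurrent update

# Class 42: Syntax Error or Access Rule Violation
UNDEFINED_TABLE = "42P01"        # relation does not exist
```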
As an experienced software dev, I know better than to waste my time writing boilerplate that can be vomited up by an LLM, since somebody else has already written it and I should just use that instead.
If a bot can develop your software better than you then you’re a shit software dev
That’s not what is happening. The bot writes code and then I tell it what to change until it’s close enough, then I make the final touches myself. It’s like having a junior programmer do the grunt work for you.
As someone starting a small business, it has helped tremendously. I use a lot of image generation.
If that didn’t exist, I’d either have to use crappy-looking clip art or pay a designer, which I literally can’t afford.
Now my projects actually look good. It makes my first projects look like a highschooler did them last minute.
There are many other uses, but I rely on it daily. My business can exist without it, but the quality of my product is significantly better and the cost to create it is much lower.
Your product is other people’s work thrown in a blender.
Congrats.
Wait til you realize that’s just what art literally is…
You’re confusing AI art with actual art, like rendered illustrations and paintings
It’s as much “real” art as photography: making a relatively finite number of decisions and finding something that looks “good”.
Really good photography is actually pretty hard and the best photographers are in high demand.
It involves a ton of settings for the camera, frequently post processing to balance out anything that wasn’t perfect during the shoot. Plus there is a ton of blocking, lighting, and if doing portraits and other planned shoots there is a lot of directing involved in getting the subjects to be in the right positions/showing the right emotions, etc. Even shooting nature requires a massive amount of planning and work beyond a few camera settings.
Hell, even stock photos tend to be a lot of work to set up!
If you think that someone taking an in-focus photo with adequate lighting and posting it to Instagram is the same as professional photography, then you have no idea what is involved.
I don’t think any designer does work without heavily relying on ai. I bet that’s not the only profession.
It’s ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?
No.
Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using them. They just keep releasing very similar, mostly bog standard transformers models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.
Deepseek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.
Good ideas are a dime a dozen. Implementation is the game.
Universities may churn out great papers, but what matters is how well they can implement them. Private entities win at implementation.
The corporate implementations are mostly crap though. With a few exceptions.
What’s needed is better “glue” in the middle. Larger entities integrating ideas from a bunch of standalone papers, out in the open, so they actually work together instead of mostly fading out of memory while the big implementations never even know they existed.
wait so the people doing the work don’t get paid and the people who get paid steal from others?
that is just so uncharacteristic of capitalism, what a surprise
It’s also cultish.
Everyone was trying to ape ChatGPT. Now they’re rushing to ape Deepseek R1, since that’s what is trending on social media.
It’s very late-stage capitalism, yes, but that doesn’t come close to painting the whole picture. There’s a lot of groupthink, an urgency to “catch up and ship” and look good quick rather than focus on experimentation, sane applications, and such. When I think of shitty capitalism, I think of stagnant entities like shitty publishers, dysfunctional departments, consumer abuse, things like that.
This sector is trying to innovate and make something efficient, but it’s like the purse holders and researchers have horse blinders on. Like they are completely captured by social media hype and can’t see much past that.
The actual survey result:
Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.
So they’re not saying the entire industry is a dead end, or even that the newest phase is. They’re just saying they don’t think this current technology will make AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren’t betting this will turn into AGI; they’re betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the ’90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network. They’d probably say no. It took more than a decade of software, cultural and societal development to discover the applications for the internet.
It’s becoming clear from the data that each increment of error correction needs exponentially more data. I suspect that pretty soon we will realize that what’s been built is a glorified homework cheater and a better search engine.
what’s been built is a glorified homework cheater and an ~~better~~ unreliable search engine
I agree that it’s editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since chatGPT first came to market. It’s often implied (e.g. you’ll be able to replace workers with this) or they are more vague on timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).
With that context I think it’s fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won’t be able to deliver AGI on the timeline they are promising.
There are plenty of back-office ticket-processing jobs that can, and have been, replaced by current-gen AI.
AI isn’t going to figure out what a customer wants when the customer doesn’t know what they want.
Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or a realistic extrapolation of that. It is for the sort of product OpenAI is promising: the equivalent of a full-time research assistant for 20k a month. Which is way more expensive than an actual research assistant, but that’s not stopping them from making the pitch.
Part of it is we keep realizing AGI is a lot broader and more complex than we think
The bigger loss is the ENORMOUS amounts of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.
AI data centers also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55-year-old human has produced since birth.
Complete waste.
I think most people agree, including the investors pouring billions into this.
The same investors that poured (and are still pouring) billions into crypto, and invested in sub-prime loans and valued pets.com at $300M? I don’t see any way the companies will be able to recoup the costs of their investment in “AI” datacenters (i.e. the $500B Stargate or $80B Microsoft; probably upwards of a trillion dollars globally invested in these data-centers).
Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.
No, there are some ideas out there. Concepts like hierarchical reinforcement learning are more likely to lead to AGI through the creation of foundational policies; problem is, as it stands, it’s a really difficult technique to use, so it isn’t used often. And LLMs have sucked all the research dollars out of any other ideas.
Technology in most cases progresses on a logarithmic scale when innovation isn’t prioritized. We’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they say it is. These days we’re in the “bells and whistles” phase where they add unnecessary bullshit to make it seem new like adding 5 cameras to a phone or adding touchscreens to cars. Things that make something seem fancy by slapping buzzwords and features nobody needs without needing to actually change anything but bump up the price.
I remember listening to a podcast that is about scientific explanations. The guy hosting it is very knowledgeable about this subject, does his research and talks to experts when the subject involves something he isn’t himself an expert.
There was this episode where he kinda got into the topic of how technology only evolves with science (because you need to understand the stuff you’re doing and you need a theory of how it works before you make new assumptions and test those assumptions). He gave an example of the Apple visionPro being a machine that despite being new (the hardware capabilities, at least), the algorithm for tracking eyes they use was developed decades ago and was already well understood and proven correct by other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google’s paper that showed we could parallelize language models, leading to the creation of “larger language models”. That was Google doing science. But you can’t control when some new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.
This also shows why the current neglect of basic/general research without a profit goal is holding back innovation.
There’s been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.
Me and my 5.000 closest friends don’t like that the website and their 1.300 partners all need my data.
Why so many sig figs for 5 and 1.3 though?
Some parts of the world (mostly Europe, I think) use dots instead of commas for displaying thousands. For example, 5.000 is 5,000 and 1.300 is 1,300
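A quick sketch of the difference, in Python for concreteness (the separator swap here is just illustrative; proper locale-aware formatting would use the locale module with an installed German locale):

```python
# The same number under the two conventions. The f-string produces the
# US/UK style; swapping the separators gives the continental style.
n = 5000.25

us_style = f"{n:,.2f}"  # '5,000.25'
eu_style = us_style.translate(str.maketrans(",.", ".,"))  # '5.000,25'

print(us_style, "vs", eu_style)
```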
Yeah, and they’re wrong.
Says the country where every science textbook is half science half conversion tables.
Not even close.
Yes, one half is conversion tables. The other half is scripture disproving Darwinism.
We (in Europe) probably should be thankful that you are not using feet as thousands-separator over there in the USA… Or maybe separate after each 2nd digit, because why not… ;)
It makes sense from typographical standpoint, the comma is the larger symbol and thus harder to overlook, especially in small fonts or messy handwriting
But from a grammatical sense it’s the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between integer and fraction.
I have no strong preference either way. I think both are valid and sensible systems, and it’s only confusing because of competing standards. I think over long enough time, due to the internet, the period as the decimal separator will prevail, but it’s gonna happen normally, it’s not something we can force. Many young people I know already use it that way here in Germany
Yes. It’s the normal Thousands-separator notation in Germany for example.
But usually you don’t write three zeros after it, because that becomes a hint of thousands.
Like 2.50 is 2€50 but 2.500 is 2500€
Is there an ISO standard for this stuff?
No, 2,50€ is 2€ and 50ct, 2.50€ is wrong in this system. 2,500€ is also wrong (for currency, where you only care for two digits after the comma), 2.500€ is 2500€
What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one pence/cent/(whatever Europe’s hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth but still exists.
Yes, that’s true, but more of an edge case. Something like gasoline is commonly priced in fractional cents, tho.
I knew the context, was just being cheesy. :-D
Too late… You started a war in the comments. I’ll proudly fight for my country’s way to separate numbers!!! :)
oh lol
I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.
The fact nothing got optimized, and it still didn’t collapse, after deepseek? kind of gave the whole game away. there’s something else going on here. this isn’t about the technology, because there is no meaningful technology here.
I have been called a killjoy luddite by reddit-brained morons almost every time.
Why didn’t you drop the quotes from Turing, Minsky, and Lovelace?
because finding the specific stuff they said, which was in lovelace’s case very broad/vague, and in turing+minsky’s cases, far too technical for anyone with sam altman’s dick in their mouth to understand, sounds like actual work. if you’re genuinely curious, you can look up what they had to say. if you’re just here to argue for this shit, you’re not worth the effort.
What’re you talking about? What happened in 1952?
I have to disagree, I don’t think it’s meaningless. I think that’s unfair. But it certainly is overhyped. Maybe just a semantic difference?
Companies aren’t investing to achieve AGI as far as I’m aware; that’s not the end game, so I think this title is misinformation. Even if AGI was achieved it’d be a happy accident, not the goal.
The goal of all these investments is to convince businesses to replace their employees with AI to the maximum extent possible. They want that payroll money.
The other goal is to cut out all third party websites from advertising revenue. If people only get information through Meta or Google or whatever, they get to control what’s presented. If people just take their AI results at face value and don’t actually click through to other websites, they stay in the ecosystem these corporations control. They get to sell access to the public, even more so than they do now.
I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).
The peak of AI for me was generating images of Muppet versions of the Breaking Bad cast; it’s been downhill since.
Like all the previous scam bubbles that were kinda interesting or fun as a novelty, and once money came pouring in became absolute chaos and maddening.
AGI models will enter the market in under 5 years according to experts and scientists.
trust me bro, we’re almost there, we just need another data center and a few billions, it’s coming i promise, we are testing incredible things internally, can’t wait to show you!
We are having massive exponential increases in output with all sorts of innovations, every few weeks another big step forward happens
Around a year ago I bet a friend $100 we won’t have AGI by 2029, and I’d do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that’s still dumber than the average human. In comparison humans are “trained” with maybe ten thousand “tokens” and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.
Humans are “trained” with maybe ten thousand “tokens” per day
Uhhh… you may wanna rerun those numbers.
It’s waaaaaaaay more than that lol.
and take only a couple dozen watts for even the most complex thinking
Mate’s literally got smoke coming out of his ears lol.
A single Wh is 860 calories… I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.
- Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.
- A human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more depending on activity levels.
- While yes, an AI costs substantially more Wh, it also is done in weeks, so it’s obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months, it’d prolly require way WAY more than 13,000 Wh during the process for similar reasons.
- Once trained, a single model can be duplicated infinitely. So it’d be more fair to compare how much millions of people cost to raise, compared to a single model to be trained. Because once trained, you can now make millions of copies of it…
- Operating costs are continuing to go down and down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.
True, my estimate for tokens may have been a bit low. Assuming a 7 hour school day where someone talks at 5 tokens/sec you’d encounter about 120k tokens. You’re off by 3 orders of magnitude on your energy consumption though; 1 watt-hour is 0.86 food Calories (kcal).
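A back-of-the-envelope check of both corrections (the ~2,000 kcal/day figure is a rough average I’m assuming, not something from the thread):

```python
# Tokens: a 7-hour school day at ~5 tokens/second of heard speech.
tokens_per_day = 7 * 3600 * 5
print(tokens_per_day)  # 126000 -> ~120k tokens/day

# Energy: 1 Wh = 3600 J and 1 food Calorie (kcal) = 4184 J.
kcal_per_wh = 3600 / 4184
print(round(kcal_per_wh, 2))  # 0.86 kcal per Wh, not 860

# Energy to adulthood at ~2000 kcal/day for 18 years:
wh_to_adulthood = 2000 / kcal_per_wh * 365 * 18
print(f"{wh_to_adulthood:.1e} Wh")  # ~1.5e+07 Wh, ~3 orders of magnitude above 13,000
```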
It peaked when it was good enough to generate short somewhat coherent phrases. We’d make it generate ideas for silly things and laugh at how ridiculous the results were.
Meanwhile a huge chunk of the software industry is now heavily using this “dead end” technology 👀
I work in a pretty massive tech company (think, the type that frequently acquires other smaller ones and absorbs them)
Everyone I know here is using it. A lot.
However my company also has tonnes of dedicated sessions and paid time to instruct its employees on how to use it well, and to get good value out of it, and the pitfalls it can have
So yeah turns out if you teach your employees how to use a tool, they start using it.
I’d say LLMs have made me about 3x as efficient or so at my job.
Your labor before they had LLMs helped pay for the LLMs. If you’re 3x more efficient and not also getting 3x more time off for the labor you put in previously for your bosses to afford the LLMs you got ripped off my dude.
If you’re working the same amount and not getting more time to cool your heels, maybe, just maybe, your own labor was exploited and used against you. Hyping how much harder you can work just makes you sound like a bitch.
Real “tread on me harder, daddy!” vibes all throughout this thread. Meanwhile your CEO is buying another yacht.
I am indeed getting more time off for PD
We delivered on a project 2 weeks ahead of schedule so we were given raises, I got a promotion, and we were given 2 weeks to just do some chill PD at our own discretion as a reward. All paid on the clock.
Some companies are indeed pretty cool about it.
I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.
At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there’s actual financial incentive for me to perform way better.
You also are ignoring the stress thing. If I can work 3x better, I can also just deliver in almost the same time, but spend all that freed up time instead focusing on quality, polishing the product up, documentation, double checking my work, testing, etc.
Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or 2 to spare and spent a buncha extra time going over everything with a fine tooth comb twice to make sure we didn’t miss anything.
And instead of mad rushing 8 hours straight, it’s just generally more casual. I can take it slower and do the same work but just in a less stressed out way. So I’m literally just physically working less hard, I feel happier, and overall my mood is way better, and I have way more energy.
I will say that I am genuinely glad to hear your business is giving you breaks instead of breaking your backs.
That sounds so cool! I’m glad you’re getting the benefits.
I’m only wary that the cash-making machine will start tightening the ropes on the free time and the deadlines.
That’s very cool.
It’ll be interesting to see how it goes in a year’s time, maybe they’ll have raised their expectations and tightened the deadlines by then.
The thing is, the tech keeps advancing too so even if they tighten up deadlines, by the time they did that our productivity also took another gearshift up so we still are some degree ahead.
This isn’t new, in software we have always been getting new tools to do our jobs better and faster, or produce fancier results in the same time
This is just another tool in the toolbelt.
Are you a software engineer? Without doxxing yourself, do you think you could share some more info or guidance? I’ve personally been trying to integrate AI code gen into my own work, but haven’t had much success.
I’ve been able to ask ChatGPT to generate some simple but tedious code that would normally require me read through a bunch of documentation. Usually, that’s a third party library or a part of the standard library I’m not familiar with. My work is mostly Python and C++, and I’ve found that ChatGPT is terrible at C++ and more often than not generates code that doesn’t even compile. It is very good at generating Python by comparison, but unfortunately for me, that’s only like 10% of my work.
For C++, I’ve found it helpful to ask misc questions about the design of the STL or new language features while I’m studying them myself. It’s not actually generating any code, but it definitely saves me some time. It’s very useful for translating C++'s “standardese” into English, for example. It still struggles to generate valid code using C++20 or newer though.
I also tried a few local models on my GPU, but haven’t had good results. I assume it’s a problem with the models I used not being optimized for code, or maybe the inference tools I tried weren’t using them right (oobabooga, kobold, and some others I don’t remember). If you have any recommendations for good coding models I can run locally on a 4090, I’d love to hear them!
I tried using a few of those AI code editors (mostly VS Code plugins) years ago, and they really sucked. I’m sure things have improved since then, so maybe that’s the way to go?
I primarily use GPT style tools like ChatGPT and whatnot.
The key is, rather than asking it to generate code, specify that you don’t want code and instead want it to help you work through the solution. Tell it to ask you meaningful questions about your problem and effectively act as a rubber duck.
Then, after you’ve chosen a solution with it, ask it to generate code based on all the above convo.
This will typically produce way higher quality results and helps avoid potential X/Y problems.
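For anyone who wants to script it rather than do it in the chat UI, here’s a rough sketch of that two-phase flow with the OpenAI Python client (model name and prompts are placeholders, and in practice phase 1 is an interactive back-and-forth):

```python
# Rough sketch of the two-phase "rubber duck first, code second" flow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [
    {"role": "system", "content": (
        "Act as a rubber duck. Do NOT write code yet. Ask me meaningful "
        "questions about my problem until we agree on a solution."
    )},
    {"role": "user", "content": "I need to deduplicate events arriving out of order."},
]

# Phase 1: iterate on the design. In practice this is a conversational loop.
reply = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# ... several question/answer rounds later ...

# Phase 2: only now ask for code, grounded in the whole conversation.
history.append({"role": "user", "content": "Now generate code based on everything above."})
code_reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(code_reply.choices[0].message.content)
```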
This is how all tech innovation has gone. If you don’t let the bosses exploit your labour someone else will.
If tech had unions this wouldn’t happen as much, but that’s why they don’t really exist.
It’s not that LLMs aren’t useful as they are. The problem is that they won’t stay as they are today, because they are too expensive. There are two ways for this to go (or an eventual combination of both):
- Investors believe LLMs are going to get better and they keep pouring money into “AI” companies, allowing them to operate at a loss for longer. That’s tied to the promise of an actual “intelligence” emerging out of a statistical model.
- Investments stop pouring in, the bubble bursts, and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that’s called enshittification.
You skipped possibility 3, which is actively happening:
Advancements in tech enable us to produce results at a much, much cheaper cost.
Which is happening with diffusion-style LLMs that simultaneously cost less to train and cost less to run, but also produce both faster and better-quality outputs.
That’s a big part people forget about AI: it’s a feedback loop of improvement as soon as you can start using AI to develop AI
And we are past that mark now, most developers have easy access to AI as a tool to improve their performance, and AI is made by… software developers
So you get this loop where as we make better and better AIs, we get better and better at making AIs with the AIs…
It’s incredibly likely the new diffusion AI systems were built with AI assisting in the process, enabling them to make a whole new tech innovation much faster and easier.
We are now in the uptick of the singularity, and have been for about a year now.
Same goes for hardware. It’s very likely now that Nvidia has incorporated AI into their production process, using it for micro-optimizations in its architectures and designs.
And then those same optimized gpus turn around and get used to train and run even better AIs…
In 5-10 years we will look back on 2024 as the start of a very wild ride.
Remember we are just now in the “computers that take up entire warehouses” step of the tech.
Remember that in the 80s, a “computer” cost a fortune, took tonnes of resources, multiple people to run it, took up an entire room, was slow as hell, and could only do basic stuff.
But now, 40 years later, they fit in our pockets and are (non-hyperbole) billions of times faster.
I think by 2035 we will be looking at AI as something mass produced for consumers to just go in their homes; you go to Best Buy and compare different AI boxes to pick which one you are gonna get for your home.
We are still at the stage of people in the 80s looking at computers and pondering “why would someone even need to use this, why would someone put one in their house, let alone their pocket”
I remember having this optimism around tech in my late twenties.
I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs. So far what I see is “developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week.”
All that increased dev capacity needs to translate into new useful products. Right now the “new useful product” that all energies are poured into is… AI itself. Or even worse, shoehorning “AI-powered” features in all existing product, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I’m pretty confident that something new and useful will remain but for now the level of hype is tremendous!
Good, fire 2 devs out of 3.
Companies that do this will fail.
Successful companies respond to this by hiring more developers.
Consider the taxi cab driver:
With the invention of the automobile, cab drivers could do their job way faster and way cheaper.
Did companies fire drivers in response? God no. They hired more
Why?
Because they became more affordable, less wealthy clients could now afford their services which means demand went way way up
If you can do your work for half the cost, usually demand goes up by way more than x2 because as you go down in wealth levels of target demographics, your pool of clients exponentially grows
If I go from “it costs me 100k to make you a website” to “it costs me 50k to make you a website” my pool of possible clients more than doubles
Which means… you need to hire more devs asap to start matching this newfound level of demand
If you fire devs when your demand is about to skyrocket, you fucked up bad lol
deleted by creator
I think the human in the loop currently needs to know what the LLM produced or checked, but they’ll get better.
For sure, much like how a cab driver has to know how to drive a cab.
AI is absolutely a “garbage in, garbage out” tool. Just having it doesn’t automatically make you good at your job.
The difference in someone who can wield it well vs someone who has no idea what they are doing is palpable.
Imo our current versions of AI are too generalized; we add so much information into the AI to make it good at everything that it all mixes together into a single grey hallucinating slop, and the AI ends up being good at nothing.
We need to find ways to specialize AI and give said AI a more consistent and concrete personality to move forward.
Imo to make an AI that is truly good at everything, we need to have multiple AIs, all designed to do something different, all working together (like the human brain works), instead of making every single AI a personality-less sludge of jack of all trades, master of none.
Mixture of experts is the future of AI. Breakthroughs won’t come from bigger models, it’ll come from better coordinated conversations between models.
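For what it’s worth, the core mechanism in a mixture-of-experts layer is just a learned router that activates the top-k experts per input. A toy numpy sketch (illustrative only, not how any particular production model does it):

```python
# Minimal mixture-of-experts sketch: a gating network routes each input
# to its top-k experts and combines their outputs.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 4, 2

W_gate = rng.normal(size=(d, n_experts))            # router weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_layer(x):
    logits = x @ W_gate                             # score each expert for this input
    top = np.argsort(logits)[-k:]                   # pick the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                        # softmax over the chosen experts only
    # Weighted sum of the selected experts' outputs; the rest stay inactive.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_layer(rng.normal(size=d)).shape)          # (16,)
```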
They did that a while ago; it was a big feature of gpt 3
Lots of people think this. They keep turning out wrong. Look up the bitter lesson
We already did this like a year ago mate. That was like v3 of gpt
Yeah, but it’s like… pretty half-baked
No, it’s just not something exposed to you to see
But under the hood it very much does shift gears depending on what you ask it to do
It’s why gpt can do stuff now like analyze contents of images, basic OCR, but also generate images too.
Yet it can also do math, talk about biology, give relationship advice…
I believe OpenAI called them “specialists” or something vaguely like that, at the time.