workers should demand that the AI become the CEO, the President, and the Board of Directors’ supervisor.
For any change, AI or no, why would you take out part of your existing company before confirming that the new thing works for the new role?
Often even functional companies are in effect run by rank-and-file people, paid almost nothing, who know their particular aspects of the job very well. They are managed by people who, the higher their rank, know less and less about the actual work that makes the company run. This works fine when nothing major changes, but when you ask people incapable of doing the job to make major strategic changes to an enterprise they don’t understand, it shockingly goes poorly.
Because for a short time this allows for wild speculation in your favour, and you can collect your bonus and secure a higher-paying job elsewhere before reality hits and someone else gets to clean up the mess.
Because they’re stupid and/or cheap. Remember, the guys at the top usually got to their position through ass-kissing, or otherwise are beholden to ass-kissers.
🤦‍♂️
This seems to be a lie, there are no customer service positions available on Klarna’s career page.
They also have zero US-based roles available. Canada and Europe only.
Now, the company says it imagines an “Uber-type of setup” to fill their ranks, with gig workers logging in remotely to argue with customers from the comfort of their own homes.
They are likely hiring through an agency to avoid paying benefits
Collectively, we as a people should stop utilizing parasitic organizations. Imagine corporations not giving out jobs yet expecting people to use their services/products.
Reverse onion. I thought this was satire at first.
Now, the company says it imagines an “Uber-type of setup” to fill their ranks, with gig workers logging in remotely to argue with customers from the comfort of their own homes.
So they’re using their spectacular failure as a chance to exploit their new ‘employees’ via the gig economy.
Fuck them. They have learned nothing about respect or decency, and I hope they continue to crash and burn.
Klarna sucks balls. “We’re your friend! We help you buy things!*”
*APR 69%; yearly fee: Left limb. Firstborn children no longer accepted.
I stopped reading at Financial Tech startup. From that alone I know what kind of people we’re dealing with here.
Something about FinTech…it just attracts the worst possible people.
And how many of these Uberserfs will be located in developed countries making good salaries? None, you say?
Probably depends on the language in the target market, a lot of European languages are not that common in countries with cheap labor.
So they’re using their spectacular failure as a chance to exploit their new ‘employees’ via the gig economy.
To the ruling class this was always the true lucrative appeal of A.I. and is precisely why they were willing to make such massive bets on a fundamentally broken technology.
The cherry on top is that tech work used to be a threat to big businesses, especially big tech companies, because society considered it a respectable job. Big businesses and oligarchs saw this as an obstacle to destroying tech work as a decent-paying career, and AI was the perfect propaganda tool to remove that obstacle, because even most tech workers bought the lies hook, line, and sinker.
Now, the company says it imagines an “Uber-type of setup” to fill their ranks, with gig workers logging in remotely to argue with customers from the comfort of their own homes.
Alternate headline: “Identity thieves salivating at the prospect of gaining unvetted positions at a consumer financial company”
The buy-now-pay-later company had previously shredded its marketing contracts in 2023, followed by its customer service team in 2024, which it proudly began replacing with AI agents.
A few months after freezing new hires, Klarna bragged that it saved $10 million on marketing costs by outsourcing tasks like translation, art production, and data analysis to generative AI. It likewise claimed that its automated customer service agents could do the work of “700 full-time agents.”
As Siemiatkowski told Bloomberg, “cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality.”
Also, just want to recognize this gem:
Though executives in every industry, from news media to fast food, seem to think AI is ready for the hot seat — an attitude that’s more grounded in investor relations than an honest assessment of the tech — there are growing signs that robot chickens are coming home to roost.
Klarna: Fire Now, Pay Later.
I’m sure the conclusion of this will be “AI bad” like usual, when in reality a complete idiot with no understanding of AI was leading the project.
AI will replace part of our jobs whether people like it or not. But the CEO of the business is a moron so he did his special move and replaced people instead of tasks.
Every CEO thinks like this. CEOs are so incredibly bullish on AI BECAUSE they want to replace people and not tasks.
AI in the hands of moron CEOs is bad. (And they’re all morons)
LLMs can be useful in extremely limited circumstances. The problem is that idiots like this are going to use them to replace employees and consumers will receive worse products and services because of it.
AI of some kind will replace parts of our jobs, but LLM chatbots won’t except in some specific cases. This is just a hype bubble.
AI can be a useful tool, and I think it will slowly become more common in the workplace; for example, it can be very convenient for knowledge retrieval. But it’s laughable to think it can replace humans. I’d wager that any time “AI” can replace a human, the job could already have been automated through other means.
LLMs are absolute garbage for knowledge retrieval.
Generalized LLMs like ChatGPT are. If you train a model on your own documentation then all it “knows” is what is in the docs and it can perform very well at finding relevant results. It’s just kind of a context-aware search engine at that point.
The problem again is that companies mostly aren’t doing that, they’re trying to replace humans with ChatGPT.
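For what it’s worth, the “context-aware search engine” idea above is the retrieval half of retrieval-augmented generation (RAG): embed your docs, find the ones closest to the query, and only then hand those snippets to the LLM. Here’s a toy sketch of just the retrieval step, using stdlib-only bag-of-words cosine similarity in place of a real embedding model and vector DB (all the document names and contents are made up for illustration):

```python
import math
from collections import Counter

# Hypothetical internal docs standing in for a company knowledge base.
DOCS = {
    "deploy.md": "to deploy the service run the release pipeline and check health endpoints",
    "auth.md": "authentication uses oauth tokens which expire after one hour",
    "oncall.md": "when the pager fires check the health endpoints and recent deploys",
}

def vectorize(text):
    # Term-frequency bag of words; a real setup would use an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    # Rank docs by similarity to the query and return the top k names.
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(DOCS[d])), reverse=True)
    return ranked[:k]

# The LLM would then be prompted with only the retrieved snippets,
# so its summary is grounded in the docs rather than free-floating.
print(retrieve("why do my tokens expire"))
```

A real pipeline swaps `vectorize` for an embedding model and `DOCS` for a vector store, but the shape is the same: retrieve first, generate second, so the model only summarizes what was actually found.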
Except that your context-aware search engine would tell you when there is no result, while AI will just make shit up and distort the results it did find.
That’s not true.
Vector DBs and LLMs are really powerful at knowledge retrieval.
See NotebookLM and its open-source alternatives.
They are “media transformers” and might be useful if limited to it.
Knowledge retrieval, certainly not, as “they” know nothing besides how likely one data fragment is to appear near other data fragments.
Perhaps I’m using the wrong terminology. But being able to ask in natural language “why is something the way it is” and it returns references to code, bugs, and documentation along with a small summary is pretty cool. It works better than any of the half-baked corporate search engines I’ve used before. Is this not “knowledge retrieval”? In any case I can see the utility.
I’ve tested it with both Python and Cisco IOS pretty thoroughly, and it very convincingly gets things wrong a lot.
The problem with that is the constant hallucinations and complete lack of correctness checks.
Sure, you can’t trust LLMs and just copy-paste whatever comes out of it. But it’s very effective as a way to find something in very large mixed datasets when you may not know which exact keywords to use for a traditional search engine.
So your overall point is that AI is a better search engine. “It’s like Google, but better.”
This is both likely true, and nowhere near fantastical enough to justify the trillion-dollar hype cycle.
Yeah the AI hype levels are insane, but at the same time I think there is some interesting and actually useful technology there. That’s my 2c anyway.
The search thing is specific to internal data sets, btw. Anyone who has used intranet search engines at large companies can probably relate to just how terrible they are. Much worse than Google is at searching the internet.
Anyone who has used intranet search engines at large companies
SharePoint search functionality comes to mind. Our team commonly refers to it as write-once storage, since once you throw something in there you’ll never find it again. And yes, we stole the term from somewhere.
“Fixes companies’ internal documentation” is actually a huge get for AI, and would be worth some real hype, but yeah.
That’s still peanuts compared to the marketing, which is why people are getting pretty tired of the whole AI push. The actual, incremental improvements are being ridden over roughshod by snake-oil salesmen.
So at best it might be useful for hinting.
Eh. It’s useful for finding what I want to know. The result to a query which goes like “Based on this paragraph from some documentation written in 2005 (link) the answer is <bunch of generated text rehashing the information I wanted to find in the first place>” is a whole lot more useful than “Here is a list of thousands and thousands of irrelevant and incoherently sorted results, of which one is probably what you were looking for. Good luck.” which was, unfortunately, the state of the art up to this point.
When I try that, I just get confidently incorrect answers.
Is it because they realized that you can’t pick on an LLM?