
>One could also say that TV makes humans angry and violent

It does

Anderson and Bushman (2002), https://www.researchgate.net/publication/11440811_The_Effect... : "Evidence is steadily accumulating that prolonged exposure to violent TV programming during childhood is associated with subsequent aggression."

Paik, H., & Comstock, G. (1994). The effects of television violence on antisocial behavior: A meta-analysis. Communication Research, 21(4), 516–546. https://doi.org/10.1177/009365094021004004 : "Results showed a positive and significant correlation between TV violence and aggressive behavior."

Ironically, I used Gemini to look those up. Being a social studies thing, of course there is no absolute proof of this; there are many caveats and ways of looking at it, etc.

Tangential - "find a meta-analysis to back up my point" is ridiculously easy with AI, and it can be used by both sides. I could just as easily negate the ask and get compelling results.

I would hate having to write a dissertation right now.


I think you're agreeing with me. My point is that TV does not inherently induce negative emotions, but the content of it can. Similarly, AI content does not have to do the same, but poor quality AI content can.

Yeah. More importantly though, AI seems to be a novel way to pry open the crazy out of some people, with sometimes disastrous results.

Or putting it more charitably, some people seem to be more vulnerable, for whatever reason, to multiple different kinds of mental breakdowns (like the psychosis described by the "artist" "victimized" by this "crime").

While I personally don't get it (how some people are so entranced by AI as to have mental breakdowns), it does seem to be a thing, with some catastrophic results[1]. Granted, in some cases the people involved had prior serious mental health issues, but that doesn't always seem to be the case. In other words, were it not for AI, those people could reasonably have expected to live normal lives.

[1] https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots


You would not be disagreeing with me, actually. I should have clarified that, in my opinion, the problem lies somewhere in current implementations of generative AI (Google Transformer derivatives), and does not necessarily apply to every shape and form of AI.

But nearly every implementation of generative AI appears to exhibit this behavior, with Google's Nano Banana(tm) implementation as the potential sole exception or a lesser offender. Something in it is rage- and/or derangement-coded, NOT in the artistic way that rock or metal recordings are. Maybe this was the supposed "toxicity" of LLMs that was heavily discussed as chatbots rolled out, remedied by extreme sycophancy just to the point that LLMs don't literally make people flip out and drive them into a state of psychosis. But whatever it is, it's insane that everyone supportive of AI is tone-deaf to a phenomenon this obvious, reproducible, and widespread.

All it takes to turn anyone into an anti-AI Luddite is to show them a piece of text, image, code, or any data that they are familiar with. That's not a simple moral panic.


He just chewed them and spat them out

Not all of them lol

>There very much is still time to significantly control it + regulate it.

There's also huge financial momentum shoving AI down the world's throat. Even if AI were proven to be a failure today, it would still be pushed for many years because of that momentum.

I just don't see how that can be reversed.


It's being downvoted because HN has a very active billionaire/tech-bro fanbase.

Also who's this Dario?


Presumably Dario Amodei, CEO of Anthropic.

It's being downvoted because it's a ridiculous premise. "The Elites" are human too. This attitude is nonsensical and child-like. Nobody is out here trying to round up the hippies and force them to live in some kind of pods to be harvested for their nutrients or whatever.

This technology, like every prior technology, will cause some people to lose their jobs and some new jobs to be created. This will annoy people who have to learn new skills instead of coasting until retirement as they had planned.

It is no different than the buggy whip manufacturers being annoyed at Henry Ford. They were right that it was bad for their industry, but wrong about it being the death of... well all the million things they claimed it would be the death of.


Henry Ford didn't make his cars out of buggy whips. He made a new industry. He didn't cannibalize an existing one. You cannot make an LLM without digesting the source material.

> He made a new industry. He didn't cannibalize an existing one.

I don't see how you can claim the second part is true. Cars directly cannibalized other forms of personal transportation.


? Cars don't "eat" horses. I wouldn't equate "making redundant" with "consuming"

LLMs don't literally eat artists. I think you understood the metaphor.

Cannibalizing a <product/industry/etc.> is a common phrase describing the act of a new thing outcompeting an existing thing to the degree that it significantly harms the latter's market share, sometimes to the point of figurative extinction. Redundancy is a very common reason for this to occur.

It has nothing to do with literally eating.


Digesting is a weird way to say "learning from." By that logic I've been digesting news, books, movies, songs, and comic books since I was born. My brain is a great big ol' copyright violation.

What matters here is not the source material; it's the output. Possessing or consuming copyrighted material is not illegal; distributing it is. So what matters here is: can we say that the output is transformative, and does it work to progress the arts and sciences (the stated purpose of copyright in the US Constitution)?

I would say yes to both, except in rare cases of bugs or intentional copyright violations. None of the major AI vendors WANT these things to infringe copyright; they just do it from time to time by accident, or through the omission of some guardrail that nobody had yet considered. Those issues are generally fixed fairly promptly (a few major screw-ups notwithstanding).


So we have monkeys writing the same code over and over again, until the end of time. Because of "rules".

And for those of us living a reality of subjugation and fear, you're a fucking liar.

And just like Henry Ford and the automobile, one of many externalities was the destruction of black communities: white flight that drained wealth, eminent domain for highways, and increased asthma incidence and other disease from concentrated pollution.

Yet, overall it was a net positive for society... as almost every technological innovation in history has been.

Did you know that two-thirds of the people alive today wouldn't be if it weren't for the invention of the Haber-Bosch process? Technology isn't just a toy; it's our life-support mechanism. The only way our population gets to keep growing is if our technology continues to improve.

Will there be some unintended consequences? Absolutely. Does that mean we can (or even should) stop it? Hell no. Being pro-human requires you to be pro-technology.


I don't think this argument is logically sound. The assertion that this (and every other!!) technological innovation is a "net positive" merely because of our monotonic population growth is both weakly defined and unsubstantiated. Population is not a good proxy for all things we find desirable in society, and even if it were, it is only a single number that cannot possibly distinguish between factors that helped it and factors that hurt it.

Suppose I invent The Matrix, capable of efficiently sustaining 100b humans provided they are all strapped in with tubes and stuff. Oh, and no fancy simulation to keep you entertained either; it's only barely an improvement on death. Economics forces everyone into matrix-hell, but at least there's a lot of us. Net positive for society?


Human fecundity is probably not actually the meaning of life, it's just the best approximation most people can wrap their heads around.

If you can think of a better one, let me know. Be warned though, you'll be arguing with every biological imperative, religion, and upbringing in the room when you say it.


"as almost every technological innovation in history has been"

This is simply false. You really are the king of making unfounded claims.


I don't need to prove anything. You folks are the ones claiming harm. That said, AI is more akin to the invention of antibiotics than it is to the invention of any specific drug. Name any other entire category of technology from which no good has ever come. Just one.

I doubt you can. Even bioweapons led to breakthroughs in pesticides and chemotherapy. Nukes led to nuclear power, and even harmful AI stuff like deepfakes is being used for image restoration, special effects, and medical imaging.

You're just flat out wrong, and I think you know it.


You are speaking in tautologies. Yes, we know that technology investment often leads to great advancements and benefits for humanity, but that is not sufficient to obviate the need for conscientiousness and harm reduction. This technology will be used to disenfranchise people, and we need to be willing to say, "no, try again." Not to stop advancement, but to steer it into being more equitable.

We should be trying to optimize for the best combination of risk and benefit, not taking on unlimited risk on the promise of some non-zero benefit. Your approach is very much take-it-or-leave-it, which leaves very little room for regulating the technology.

The GenAI industry lobbying for a moratorium on regulation is them trying to hand-wave away any disenfranchisement (e.g. displaced workers, youth mental health, violated intellectual property rights, systemically racist outcomes, etc.).


> We should be trying to optimize for the best combination of risk and benefit

I 100% support this stance; it's good advice for life in general. I object to the ridiculous Luddite view espoused elsewhere in this thread.

>The GenAI industry lobbying for a moratorium on regulation is them trying to hand-wave away any disenfranchisement (e.g. displaced workers, youth mental health, violated intellectual property rights, systemically racist outcomes, etc.).

There must be a balance, certainly. We can't "kill it before it's born", but we also need to be practical about the costs. I'm all in on debating exactly where that line should be, but I object to the idea that it provides no value at all. That's madness, and dishonest.


It's because people rub shoulders with tech billionaires and they seem normal enough (e.g. kind to wait staff, friends, and family). The billionaires, like anyone, protect their immediate relationships to insulate the air of normality and good health they experience personally. Those people who interact with billionaires then bristle at our dissonant point of view when we point at the externalities - externalities that have been hand-waved away in the name of modernity.

Sycophancy is for more than just LLMs.


>If the argument is sustainability of training, I'm skeptical we need these payment models.

That seems to be the argument: LLM adoption leads to a drop in organic training data, causing LLMs to eventually plateau, and we'll be left without the user-generated content we relied on for a while (like SO) and with subpar LLMs. That's what I'm getting from the article, anyway.


There are so many things wrong with the points this article repeats, but those are soundbites at this point so I'm not sure one can even argue against them anymore.

Still, as for the one about organic data (or "pre-war steel") drying up: it's not a threat to model development at all. People repeating this point don't realize that we already have way more data than we need. We got to where we are by brute-forcing the problem - throwing more data at a simple training process. If new "pristine" data were to stop flowing now, we would still a) have decent pre-trained base models, and a dataset that's more than sufficient to train more of them, and b) have lots of low-hanging fruit to pick in training approaches, architectures, and data curation that will allow us to get more performance out of the same base data.

That, and the fact that synthetic data turned out to be quite effective after all, especially in the latter phases of training. No surprise there; for many classes of problems, this is how we learn as well. Anyone with experience studying math for a maturity exam or university entrance exams knows this: the best way to learn is to solve lots of variations of the same set of problems. These variations are all synthetic data (until recently generated by hand), but even their trivial nature doesn't make them less effective at teaching.
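As a toy sketch of that last point (the problem template, number ranges, and field names below are all invented for illustration), one hand-written problem can be stamped out into arbitrarily many training examples:

    import random

    # Toy "synthetic data by variation": generate many training examples
    # from a single hand-written problem template. Everything here is
    # made up for illustration.
    TEMPLATE = "A train travels {d} km in {t} hours. What is its average speed in km/h?"

    def make_variations(n, seed=0):
        rng = random.Random(seed)
        examples = []
        for _ in range(n):
            t = rng.randint(2, 10)        # duration in hours
            d = t * rng.randint(20, 120)  # distance picked so the answer is an integer
            examples.append({"question": TEMPLATE.format(d=d, t=t),
                             "answer": str(d // t)})
        return examples

    for ex in make_variations(3):
        print(ex["question"], "->", ex["answer"])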


>We got to where we are by brute-forcing the problem

This has been a bit of a concern of mine: we do things the hard way for a long time, and in doing so build a massive amount of fast hardware. Then some breakthrough massively drops the amount of compute necessary, and the surplus we suddenly have may lead to some kind of AI capability explosion.


The article gets the part about organic data dying off right. Look at Google SERPs for an example: almost nobody clicks through to the source anymore, so ad revenue is drying up for publishers, and people are publishing less, or publishing in places that pay them directly and sit behind a paywall, like Medium. Which means Google has less data to work with.

That said, what it misses is that the AI prompts themselves become a giant source of data. None of these companies are promising not to use your data, and even if you don't opt in, the person you sent the document/email/whatever to will, because they want it paraphrased or need help understanding it.


>AI prompts themselves become a giant source of data.

Good point, but can it match the old organic data? I'm skeptical. For one, the LLM environment lacks any truth or consensus mechanism like the old SO-style sites had. 100s of users might have discussed the same/similar technical problem with an LLM, but there's no way (afaik) for the AI to promote good content and demote bad ones, as it (AI) doesn't have the concept of correctness/truth. Also, the old sites were two-sided, with humans asking _and_ answering questions, whereas with AI, humans are only on the asking side.


> (AI) doesn't have the concept of correctness/truth

They kind of do, and it's getting better every day. We already have huge swaths of verifiable facts available to them to ground their statements in truth. They started building Cyc in 1984, and Wikipedia just signed deals with all the major players.

The problem you're describing isn't intractable, so it's fairly certain that someone will solve it soon. Most of the brightest minds in society are working on AI in some form now. It's starting to sound trite, but today's AI's really are the worst that AI will ever be.


> Most of the brightest minds in society are working on AI in some form now.

Source? I haven’t met one intelligent person working on AI. The smartest people are being ground into dust. They’re being replaced by pompous overconfident people such as yourself.


> I haven’t met one intelligent person working on AI.

I get the impression that you don't meet a lot of people in general.


> 100s of users might have discussed the same/similar technical problem with an LLM, but there's no way (afaik) for the AI to promote good content and demote bad ones, as it (AI) doesn't have the concept of correctness/truth

The LLM doesn't, but reinforcement does. If someone keeps asking the model how to fix the problem after being given an answer, the answer is likely wrong. If someone deletes the chat after getting the answer, it was probably right.
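A minimal sketch of what mining that signal could look like (the event names and weights below are invented, not any vendor's actual pipeline):

    # Toy sketch: turn implicit user behavior in a chat session into a
    # reward signal for reinforcement. Event names and weights are invented.
    def implicit_reward(session):
        """Guess whether an answer helped, purely from user behavior."""
        if session.get("repeat_questions", 0) >= 2:
            return -1.0  # user kept re-asking: the answer was likely wrong
        if session.get("deleted_after_answer"):
            return 1.0   # user got what they needed and closed the chat
        if session.get("copied_code"):
            return 0.5   # weak positive: the answer was taken somewhere
        return 0.0       # no usable signal either way

    sessions = [
        {"deleted_after_answer": True},
        {"repeat_questions": 3},
        {"copied_code": True},
    ]
    print([implicit_reward(s) for s in sessions])  # [1.0, -1.0, 0.5]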


AI is an entropy machine.

Those AI prompts that become data for the AI companies are yet another thing taken from human creators, who used to rely on such signals to understand what people wanted, which topics to explore, and where they hadn't communicated well enough. That 'value' is AI stealing yet more energy from the system, resulting in even less (and less valuable) human creation.


>The US was both back in the 40s as well as the late 80s when Germany and USSR crumbled. So hail to Pax Americana.

This is the historical equivalent of selective memory, and only really applies to a tiny slice of the planet - western Europe.

A lot happened in the world between 1945 and 1989. Outside of Europe and Japan, most people would probably not be so sympathetic to US actions during that period. An abridged list:

* Iran 1953 - US and UK overthrow democratically elected PM, install brutal dictator

* Guatemala 1954 - US via CIA overthrows democratically elected president, installs brutal dictator

* Brazil 1964, Chile 1973, Argentina 1976 - US supports brutal dictatorships replacing democratically elected governments

* Iran/Iraq 1980s - US funds both sides in the war

etc. This is a very abridged list.


Yeah, that's the killer part, I think. For the EU and Japan/SK the US was mostly the giver, but there were also killer events like Operation Gladio.

>Get in contact with current employees at the company. It is important that you send more than one email. I've gotten dozens of emails asking for meetings and referrals. The only time I actually respond to these is after the second email.

Please, no. Go through the proper channels like everyone else. If you have a referral - great. Otherwise, DON'T spam current employees you randomly find on LinkedIn or wherever. I get those from time to time and ignore 100% of them.


Some people don't though, and their referrals are much more valuable than dozens of additional applications.

Much like many other decisions you're making in the job market, it's a polarizing choice that increases your overall chances as long as the alienated class of people isn't too large. If 90% of people ignore those emails, but your chance of getting a first-round interview goes up 5x compared to a cold application when the remaining 10% respond, then 2+ emails are easier to send than 1 application, especially once you've done the legwork to make your application any good.
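To make that 10%/5x arithmetic concrete (the baseline rate below is an assumed number, purely for illustration):

    # Back-of-envelope for the claim above. p_cold is an assumed baseline;
    # the 10% response rate and 5x boost are the hypothetical numbers above.
    p_cold = 0.02     # assumed chance of a first-round interview per cold application
    p_respond = 0.10  # fraction of emailed employees who respond
    boost = 5         # interview-chance multiplier when someone does respond

    # Assuming a non-response leaves you no worse off than a cold application:
    p_email = p_respond * (boost * p_cold) + (1 - p_respond) * p_cold
    print(p_email, p_email / p_cold)  # 0.028, 1.4 -> a 40% overall improvement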

I haven't used techniques like these specifically yet, but as somebody who nearly always eventually gets the job once I've had a first-round interview, I wouldn't be opposed to seeking out the hiring manager and contacting them directly to decrease the résumé false-rejection rate.


The flaw in that thinking is treating this nagging as a zero-effort thing you can do to increase your odds. It is not zero-effort. You (the candidate) will have to spend time looking up people and messaging them.

There are more effective ways of spending your time than that.


>There are more effective ways of spending your time than that.

I'm not sure, in this day and age. Your goal is to get in front of a human, and that's harder than ever. If you send out hundreds of applications with no response, even a "you're blacklisted" reply from a human will feel better than the cold neglect of today.

It's not zero effort, but I'd argue it's less effort than being months into your search and trying to find another 20 companies you seem qualified for. Having the human element can also be encouraging, instead of filing the 50th dang Workday application.


Soooo... My guess is the next step in evolution of automated resume submission apps will be looking up hiring manager (or above) and sending a tailored email to "decrease false rejection rate". Can we expect a "Show HN" post with this feature soon?

There are a number of tools out there that help map who you may know at a target firm, or who may know someone at the firm. I think it's critical that you handcraft a thoughtful, personal email and not generate a flood of "fake personal." It's also a good idea to dig your well before you need the water and to help others find work in what is a very challenging economy. My $.02; your mileage may vary.

Don't spam. Don't be an asshole. And do everything you can to get that thing you want vs. spraying and praying for some cubicle job.

Exceptional folks rarely limit themselves to pushing submit to upload their CV. That proper channel is a lottery, and they know it. It's a broken system that requires hustle to increase the odds.


That's a you issue. Not everyone is asocial.

If your idea of socialization is the transactional exchange of emails between yourself and someone who happens to work at a company you might also be interested in working at, then most people would agree you have a very peculiar definition of socialization.

It's not transactional. You are assuming everyone applying is trying to use you. It's sick.

>Also what's up with the people hiking (by themselves) with a bluetooth speaker. You're by yourself, in nature. If you want to listen to music wear headphones!!

I'm baffled by this too, but I think some people get accustomed to just having a soundtrack around them at all times, like they're living in a Hollywood movie. It gets to the point where they actually sleep with something always on (in the old days that would have been a TV; not sure what it is today, probably a podcast).


People blasting awful music at any time of the day or night, anywhere (neighbours, beachgoers, public parks, transit), is enough of a problem in my country (Brazil) that Arduino/Raspberry Pi/ESP32-based Bluetooth jammers are somewhat common.

I would never try to use it though, as you can very realistically get killed in retaliation.


How could you get in trouble (aside from this probably being illegal; at least I know it is in my country)? How would people know that you are jamming the signal, and not someone else?

Non-asshole-seeming people tend to be, unbeknownst to themselves, conspicuous in these scenarios.

Is "fat uncle" a slang I don't know about?

In some Asian cultures, "uncle" can be used to refer to any man older than yourself.

Is "uncle" just an old unmarried guy?
