I'm an NLP data scientist by trade and I can attest this is a perfectly accurate, concise summary of what's been happening :) Great vid! Also, misinformation detection is its own area of research inside NLP; lots of papers.
@icanfast
1 year ago
Thx for the upvotes. Please hire me. The username on linkedin is the same. Yes, I'm that desperate.
@theorixlux
1 year ago
@@icanfast respect for the hustle, but maybe consider not having your professional username be icanfast
@icanfast
1 year ago
@@theorixlux yeah maybe you're right. but it's the same literally everywhere so it makes sense from this point of view.
@StoutProper
1 year ago
"The foundations of totalitarianism lie in the misuse of language to manipulate the truth, and humanity's tendency to be subject to such manipulation. The further a society drifts from the truth, the more it will hate those who speak it." - George Orwell
@StoutProper
1 year ago
@@icanfast do you though?
@HeortirtheWoodwarden
1 year ago
Love to see content from the Thrive soundtrack guy.
@squa_81
1 year ago
3:39 And then it proceeds to make an error in its output :) I love how French is gutted by the English-speaking world, and now by AIs!
@giantenemycrab5596
1 year ago
Is this getting added to thrive
@drdca8263
1 year ago
If the bad actors have access to the model, couldn't they just use it to evaluate different possible PR outputs, and only publish one if the model OKs it? And even if access to the actual model is kept private/limited (e.g. highly rate-limited), the bad actors could train their own model in roughly the same way; if they used that model to evaluate their potential PR outputs, that might work well enough to be approved by the other model?
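The loophole described above is essentially rejection sampling against the classifier. A toy sketch in Python, where the "model" is a purely hypothetical stand-in (a phrase blocklist), not any real classifier, and all names are invented for illustration:

```python
# Toy sketch of the loophole described above: treat the published model as
# an oracle and only release a draft it approves (rejection sampling).
# The "model" here is a hypothetical stand-in, not any real classifier.
def first_approved(drafts, model_ok):
    for draft in drafts:
        if model_ok(draft):
            return draft
    return None  # every draft was flagged

# Stand-in classifier: flags drafts containing an obviously suspect phrase.
FLAGGED_PHRASES = {"we dumped waste"}

def model_ok(text):
    return not any(phrase in text.lower() for phrase in FLAGGED_PHRASES)

drafts = [
    "we dumped waste responsibly",
    "we champion a cleaner future",
]
print(first_approved(drafts, model_ok))  # -> we champion a cleaner future
```

With enough drafts (human-written or generated), something eventually slips through, which is exactly the concern raised in the comment.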
@tipx2master788
1 year ago
Great video from one of my favourite 'tubers!
@paulrichardson2554
1 year ago
Never thought about using AI that way.
@alexhenson
1 year ago
Hey. Have you checked the game rain world? I feel it might inspire some ideas for thrive ^_^
@miradrgn
1 year ago
i have some doubts about how effective this particular technology is for this particular solution. like someone having a cool new star-shaped peg and being so eager to use it that they're trying to force it into a triangle hole. companies can and do say whatever they think will make them look best, they can and do make and then happily break promises; press releases and websites mean rather little toward the actual impact a company has. evaluating the 'sentiment' of a vast, complex social-economic-technological machine whose purpose is to generate profit at any cost (even if individuals who comprise that web genuinely want to impose restrictions on the harm it can do) based on their press writers' output feels like singling out one member of the cloud of hornets stinging you to death and asking it how it feels about you. and if we use these sentiment ratings to influence public opinion of companies, let alone policies and tax breaks and such, this risks incentivizing companies not to make material change to their actions, but to hire the writers best able to give the algorithm what it wants.

it seems to me that the much more effective way to hold companies accountable is to look at the things they actually do. carbon emissions, pollutants produced, acres of rainforest cleared, tons of chemical pesticides used, even economic factors like how much profit goes to executives and shareholders and how much workers are paid: these are all measurable sustainability metrics that can be tracked over time, and that computers don't need a massive base of complex training data to simply interpret at a base level. though i'm sure machine learning could, if used properly, provide additional insight into what those numbers mean, and even help unravel the ways companies try to obfuscate those numbers (such as outsourcing their "dirty work" to companies in countries with less strict regulations).

of course, We Can Do More Than One Thing; it's not like language-based analysis would need to fully replace all other data analysis regarding sustainability and climate change. but i do question whether it's going to have an impact tangible enough to be worth the man-hours and ingenuity and computing power being put into it. or worse yet, whether the media hype about "AI", the understandable simplicity for the public and lawmakers of "the computer tells us whether companies like trees or not", and the ease companies would have in attempting to manipulate it, could catapult it into way *too* powerful a position and let companies get off scot-free through effective manipulation of their publicity rather than the actual impact of their actions. ...yknow, more than how that happens already. "the computer has determined we're telling the truth! :)" could add a level of further legitimization to lip service.

there is the possibility of combining these approaches: using a model that analyzes companies' language, compares it against hard data about their impact on the environment and its changes over time, and uses that to determine what genuinely "truthful" sustainability publicity looks like. but... that still lends itself to companies focusing on manipulating their writing to shortcut past evaluation of their impact. a company, through the many parts that compose it, is a big difficult-to-control black box in a kind of similar way to how neural networks are, and any opportunity they're given to game their reward function, they'll take. i can already envision some corporation putting a hidden page on their website that's just a cloud of "green" keywords to try to boost their score when their site gets scraped and dumped into the algorithm.

still an interesting high-level video about the technology itself, though! in particular, the part about how networks can be frankensteined together to take advantage of existing neural network bases without repeating all the training from scratch helped shed some light for me on how more complicated and specific machine learning applications work, and why, when looking into these things, you see so much "model X is based on model Y which is a specialized version of model Z" and such.
@nahometesfay1112
1 year ago
A very big problem with this particular application of AI is that the people writing press releases aren't necessarily the ones making the decisions. How correlated are press releases with actual impact? I would expect an AI to use undesirable metrics; for example, assuming a person with the last name Rockefeller would prioritise profits over sustainability.
@brazni
1 year ago
I don't think your example in particular would be an issue, but the general objection is at the heart of the issue with this. As only the text data is analysed, only the information contained therein can be used to produce the output.
@nahometesfay1112
1 year ago
@@brazni It would be funny if my example were actually true, but I genuinely think a pre-trained AI could be looking at this text with a lot of biases. That isn't all bad: the AI might already have information about the particular people mentioned in a press release. It might consider company nationality, because some countries have tighter regulations on fraud. In the end, AI is just as subjective as humans.
@Straline.
1 year ago
@@nahometesfay1112 yep. It's really hard to teach someone something and not teach them bias as well.
@nahometesfay1112
1 year ago
@@Straline. We actually want to teach bias in this case, because the AI is supposed to identify subtext, or "read between the lines", and this is inherently subjective. As far as I can tell, gleaning information beyond the definition of the words would require some kind of subjective bias.
@aaAa-vq1bd
1 year ago
@@nahometesfay1112 that doesn’t even begin to solve the problem
@hudsonjones4090
1 year ago
Maybe we *should* create an AI that reads corporate action towards pollution. Let’s keep it small and just focus on the air pollution part of the problem. We’re gonna need a name for our clear-sky AI, so I’m thinking… … Skynet?
@happmacdonald
1 year ago
1: Detecting green sustainability (simply a specific class of what AI safety researchers call "value alignment") of companies by using AI analysis of their press release text doesn't seem nearly as powerful as surveying metrics the companies have less control over, such as tax reports, transactions, investments, and movements of money in general. There are two ways that companies can bypass your AI text detector. 1. (the simplest) They get a copy of the model you are using and practice their press releases against it until they can slip what they want by the censors. They could run either humans and/or more text-generating AIs through a maze like that. 2. (unavoidable) With zero direct knowledge of your model, your model can still amplify artificial selection: companies who *accidentally* say just the right things to fool the AI are rewarded, and so the behavior of fooling the AI spreads. AI safety research is traditionally leveled at an actual AI: how do we keep this AI from being trained into a bad actor, among other unforeseen consequences? But I see the findings from this field as quite applicable to any agent: humans, animals, human organizations like governments and corporations, policy networks, etc.
@michaelvaller
1 year ago
Finally a new video, I love it ❤️
@theultimatereductionist7592
1 year ago
Or, as I've called it since for ever, TESTING LOGICAL CONSISTENCY OF WHAT COMPANIES SAY VERSUS WHAT THEY DO.
@David_Brinkerhoff93
1 year ago
So I build an AI specialized in tricking the AI tasked with analyzing my company's text. Checkmate.
@wastucar8127
1 year ago
Wonderful video, the final bias part was great. I definitely feel it's quite notable how little literature there is about alternatives to our current capitalist system when it comes to global methods of economy. My main worry there is whether the AI would become like the average US citizen living in 'The End of History' - thank you, Mark Fisher.
@gokce9521
1 year ago
Wouldn't this just incentivise companies to try to present themselves as sustainable to the AI, rather than try to actually be sustainable? That also assumes we already have a universally agreed idea of what a sustainable company looks like for the AI to collect data on in the first place. Also, are we judging companies on how they present their sustainability, or on their actual production/consumption data? If we have data that is good enough for an AI to pass judgment on, a human brain and basic maths are already enough to decide if they are sustainable.
@Howtheheckarehandleswit
1 year ago
As for "That is also assuming that we already have a universally agreed idea of what a sustainable company looks like for the AI to collect data on in the first place": that is in fact the thesis of the video. And yes, while it would incentivize companies to figure out how to trick the AI, the idea is that this type of AI is very difficult to trick, and learns faster than humans do. As for "If we have data that is good enough for an AI to pass judgment on, a human brain and basic maths are already enough to decide if they are sustainable": the point of the AI is to take the companies where we do have the data needed to judge them easily, feed their marketing into the machine along with an answer key based on those (presumably leaked) easy numbers, and have the computer figure out how to tell the difference between the liars and the genuinely green companies just by looking at their marketing. THEN you apply that same model to other companies you don't have good data for.
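The pipeline sketched in that comment (fit a classifier on companies where hard data supplies the labels, then score marketing text from companies without such data) looks roughly like the following toy bag-of-words Naive Bayes. All the press-release snippets, labels, and vocabulary here are invented for illustration; a real system would use a far larger corpus and a neural model:

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labeled_press_releases):
    # labeled_press_releases: (text, label) pairs for companies whose hard
    # sustainability data we DO have -- the "answer key" described above.
    counts = {"green": Counter(), "greenwash": Counter()}
    totals = Counter()
    for text, label in labeled_press_releases:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    # Naive Bayes with add-one smoothing, applied to a company we have
    # no hard data for -- only its marketing text.
    vocab = set(counts["green"]) | set(counts["greenwash"])
    best_label, best_score = None, -math.inf
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in tokenize(text):
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy data: in practice the labels would come from measurable
# metrics (emissions records, audits), not from the text itself.
training = [
    ("we cut emissions and published audited carbon data", "green"),
    ("our verified renewable transition reduced waste", "green"),
    ("we are passionate about a greener tomorrow", "greenwash"),
    ("our brand celebrates nature and green values", "greenwash"),
]
counts, totals = train(training)
print(classify("audited emissions data shows reduced waste", counts, totals))
```

This prints `green` for the toy input; the point of the sketch is only the shape of the pipeline (labeled fit, unlabeled apply), not the model choice.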
@unepintade
1 year ago
Did you watch the full video?
@pihlajafox
4 months ago
Coconut dinosaur eats fish greatly
I'm really struggling to come up with nonsense here
Surely more people must have tried pausing by now
I bet you all feel really validated
But you really wasted your time here
(...fuck)
@stevenandersen6989
1 year ago
Now what about the next video, can AI predict when the late stages of Thrive will come out?
@bashful228
1 year ago
"Thanks for my money maker for making this video possible". Are you saying you wouldn't put content on KZitem without having a sugar daddy? It would be impossible to produce this content without a sponsor? sure.
@edwardmighetto7327
1 year ago
Yo I do transformer research, and I've never seen someone actually mention them instead of just saying 'neural networks' and moving on haha
@Houshalter
1 year ago
Wow, feels like something China or the Soviet Union would do. Crazy to see this kind of totalitarianism come to the West in my life time.
@totally_innocent1072
1 year ago
Animated???????
@audiosurfarchive
1 year ago
I am terrified of continuing to exist in this ever growing hellscape. I need to move from the Gulf Coastal Plains before they're a permanent Binary Domain submerged city.
@firoziSukh
8 months ago
I mainly discovered your channel because I am having a special interest moment with Isaac Asimov's Foundation, but now I just want to hear you talk... it's soothing. You're just on in the background as I study for my "Counselling Theories and Techniques for Groups and Couples" paper...
@jordansmith3809
1 year ago
You could be far more critical
@golgarisoul
1 year ago
I almost didn't watch this video. The Clipart ass thumbnail didn't scream the kind of sensibilities I associate with your... uh... average output? But I'm glad I found this so quickly after not reading the uploader when i got the notification
@ants_12
1 year ago
First
@eatingpancakesrightnow2786
1 year ago
Your videos are interesting and very dense, and I see now I'm only really going to be able to follow some of them during my 12 hr shift
@lauchlanbagley1934
1 year ago
Balling
@bobganky6240
1 year ago
You and Soup Emporium have been my favorite up coming channels
@THE_ONLY_REAL_WAFFLE
1 year ago
Nice
@kekcrocgod6731
1 year ago
Thank you for stating this is a sponsored video at the start
@General12th
1 year ago
Hi Oliver! I love your animations!
@jarvithink3190
1 year ago
Good Video! take some engagement❤❤❤
@sebastianabarza26
1 year ago
Here we go
@r3dp9
1 year ago
What I already see - with no AI involved (yet) - is organizations will consist of people who simultaneously believe and preach one story (i.e., 'we're the good guys') while exacerbating the exact problem they supposedly stand against. (i.e., sustainable factory, but you ship non-sustainable products, or you ship sustainable products only for them to be used unsustainably by the customer, or only your toilets are sustainable, etc.)

One of the core aspects of the problem - again, before AI gets involved - is that we have a culture of believing that opinions matter more than actions. If I'm nice to a , seek their professional advice, and respectfully avoid potentially controversial topics, I'm still considered the "enemy" because my "opinion" is that has a screw loose. Whether my opinion becomes known or not is beside the point; I'm already considered the "enemy" just by not being an "ally", by which they mean that I must pander to and unconditionally agree with . The propaganda I'm fed is very clear on this.

The greatest tell of the GW movement is that they spend more money on technologies known to be non-feasible with our current technology (solar, wind), while completely dismissing technology that was feasible decades ago (nuclear). At worst, they should consider nuclear as an interim power source. It's significantly better than oil or coal in terms of emissions (and worker deaths) per watt of power. It's even green! But they don't want solutions, they want problems; they want a war, and they want the "temporary" emergency powers you get from war. I could go further and say I don't believe in AGW at all, but even if AGW were true, the Green movement would STILL be counterproductive to their stated goal.
@Iron_uksus
1 year ago
Waffle house has found its new host
@StarshadowMelody
1 year ago
"to writing questionable harry potter novels" you mean the ones that actually got published, like Chamber of Secrets?
Comments: 53