AI Unveiled: Ethics, Innovation, and the Human-AI Symbiosis

AI, autonomous vehicles and more in this new episode of the What's AI podcast!

Welcome to a fresh episode of the "What's AI Podcast" with me, Louis-François Bouchard. I'm excited to bring you an amazing discussion with Jérémy Cohen from Think Autonomous. This episode holds a special place for me as we explore the intriguing practical and ethical layers of autonomous vehicles. Our conversation goes beyond just the technical aspects, diving into how AI shapes crucial decisions in transportation, and the implications for human involvement and responsibility.

Alongside this, we talked about AI's far-reaching impact across diverse sectors like healthcare and finance, examining the evolving world of AI startups and the balance between AI innovations and human skills.

This episode introduces a new format, sparking debates around AI, which I'm sure you'll find engaging. Whether you're an AI professional, a student, or just curious about technology's future, this episode offers insights and new perspectives!

So, join us for a cool discussion on AI, autonomous vehicles, and their transformative effects on society. Don't miss out on this chat about AI's future and its societal impact!

You can catch the episode on YouTube, and if you're a Spotify or Apple Podcasts user, be sure to follow the "What's AI" podcast by Louis-François Bouchard:

Full transcript

Jérémy Cohen: [00:00:00] And if I ask the AI to just verify every single one of my articles or my emails, they will always be very neutral, and so very boring in a way. And you have control, entry, then you have need, so there must be a need, and then there is scale and time. And the thing is, all these startups...

They had two main problems. The first was control: they completely delegated their control to OpenAI. So basically, if OpenAI wants to shut down the API tomorrow, all these startups just cease to exist. Like Thanos: it's over.

Louis-François Bouchard: This is Louis from What's AI, and here's the second episode. I received Jérémy Cohen, founder of Think Autonomous, a platform to learn about autonomous vehicles. He also has a daily newsletter sharing lots of insights about AI, your career, self-driving cars, and more. The format of this [00:01:00] episode is a bit special.

Here, I highlighted a dozen questions and debates to talk about artificial intelligence in general. So you will learn a lot of cool things related to ChatGPT, artificial intelligence, the future of your own job, how to stay relevant in the AI era, and how to cope with hallucinations. I'm sure you'll love this episode.

If you do, please don't forget to leave a like and a five-star review, depending on where you are listening to this episode. Thank you for watching. How did you get into the field, and what are you doing now?

Jérémy Cohen: I got into the field by accident, actually. I got a diploma in engineering, and it was IoT, the internet of things.

So everything smart, connected objects. In 2016, 2017, that was super hyped. And so I worked on that, and I had an internship at a consulting company. They told me, "We have a mission on smart..." I think it was fire alarms or something like that in the [00:02:00] house. And so that was perfect for me.

I just wanted to go and try my Bluetooth skills on that. And the problem was that the mission failed: a client decided that we were not a good fit, and we were an entire team. So I got out of this project after only a month, and they told me, "Okay, we don't have anything IoT for now, but we have an AI mission coming. It's going to be with a bank, and you're going to classify banking emails."

And that was something I had no idea about. The idea was that the bank was building a chatbot: when you are stuck abroad with your credit card blocked or something like that, you just send an email or a chat message, and then they unlock the card.

And so that was my discovery of AI. A few years later, I had kept learning about AI, and I learned more about how to fuse AI with IoT. [00:03:00] And I found that the best fusion of the two was robotics and self-driving cars, because it's a physical thing: there is still the physical and the intelligence.

And so I started working on the idea of becoming a self-driving car engineer. In a nutshell, I became a self-driving car engineer, and I worked on autonomous shuttles for a couple of years. I worked on computer vision as well. And today I've built a company that helps engineers and companies build self-driving cars.

Helping engineers mostly means helping them become self-driving car engineers. I have a daily newsletter that people read; we have over 10,000 engineers inside. And there are companies who also need help building their own algorithms, so we do that too. That's the introduction, in short.

And, yeah. 

Louis-François Bouchard: Awesome. So maybe we can start with the topic [00:04:00] that I have related to your expertise, and it's: are autonomous vehicles really the solution to our current transportation issues in various countries?

Jérémy Cohen: Okay. So, I think not necessarily. When I started with autonomous vehicles, the promise was that they would solve the accident problem and the traffic congestion problem.

And the pollution as well, and everything that derives from that. So several years later, I would say that for accidents, it's probably a good solution: when you have these ADAS features and emergency braking, we saw how it helps and how it will keep helping. But I got into several autonomous vehicles, including back in January when [00:05:00] I was in San Francisco, so I could experiment with a lot of these; they were driving completely in the open. And I noticed that it does not necessarily reduce traffic. It's just another car. There is not this idea of making the car better.

Well, so maybe the reduction of accidents will decrease traffic, but there is still the congestion problem, and I don't see it solved by self-driving cars. I would see it solved by better infrastructure inside cities, probably: subways, but better, you know, stuff like that. More work on that would probably be better.

Louis-François Bouchard: What do you think? Well, if all cars were autonomous, wouldn't that solve traffic? It would be super efficient and optimized, I guess.

Jérémy Cohen: So maybe, if we had lanes that are only autonomous. You know how you have bus lanes? We would have bus-plus-self-driving [00:06:00] lanes, or just self-driving lanes. Maybe. But it would mean there is some sort of entity that controls traffic, like what they do with reinforcement learning in games. I don't see that happening.

It's still decades in the future. Like, if I buy a car now, and people still buy a lot of cars today, and it's not autonomous, some people are going to keep it for like four years. So these cars are still going to be here for years in the future, unless they're banned or something.

But that's the idea. 

Louis-François Bouchard: But do you never see this happening, or just in a very long time?

Jérémy Cohen: I don't particularly see it happening yet. A hundred percent autonomous, I think, would be difficult to achieve. People like to have control, to drive; there would be some work to do on that. [00:07:00] But even there, I think the autonomous vehicle should be the solution that is offered, that is better, and that most people take. You know, like how people today sometimes take the subway to get to work and don't take their car, because it's just too much trouble.

And that would be the same kind of deal. If you take the autonomous road, it's much faster: you have no traffic lights, you have no pedestrians, you just drive home in 15 minutes. Otherwise you can take your car, but you will be in traffic, you will have these issues, you will have these regulations.

You can get into accidents, you can get into all of that, and maybe it will be more expensive with the insurance. But that would still be an option, I think.

Louis-François Bouchard: So should companies still be working on autonomous vehicles, do you think?

Jérémy Cohen: Yes, because that's a solution we need. That's something that we want, like this idea of the additional [00:08:00] lane.

That's incredible. If you have a city with a part that is designed for walking, a part that is designed for cyclists, another for autonomous vehicles, and then the rest is just a few vehicle roads here and there, then most of the city, maybe, would be autonomous, and outside of it not as much, you know, something like that. People think it's the opposite.

I would say more like that. I think in Paris we have entire roads forbidden to cars, even boulevards and things like that, completely forbidden to cars. Now we either walk or cycle, or maybe just have buses, and maybe autonomous vehicles inside of that too.

Louis-François Bouchard: Yeah, I see it as much more promising for larger transportation, like autonomous buses or shuttles, for example, compared to just a bunch of autonomous cars, I guess.[00:09:00]

Jérémy Cohen: And trucking. All these repetitive things, like trucking, are really tiring for people. We can talk about job loss, but these are also not jobs that they particularly like. I don't think I'm wrong here: many complain every day, have shorter lifespans, all of that. So yeah, there is that too.

Louis-François Bouchard: Yeah, perfect. Let's jump into the second one. So are AI startups dead? 

Jérémy Cohen: Okay. So you mean after the OpenAI announcements?

Louis-François Bouchard: Well, obviously that has changed things, but also just in general, because there are so many. I can start by giving my opinion.

Yeah, basically there are so many startups, and I will assume the vast majority of them are based directly on OpenAI products, or Claude, or some other large language models, [00:10:00] with very little innovation. Of course, some of them build nice products around them and can be a bit different in what they bring to the table other than just prompting the LLM.

But I feel like there may still be a lot of money in the AI startup space, and it may lead to some kind of other AI winter, where all this invested money is basically for almost nothing, because OpenAI can release whatever a good, successful company has built directly on their platform, since these companies are so dependent on OpenAI.

So I'm just afraid that right now it feels a bit discouraging to start a new product or project, especially if it's related to artificial intelligence, just [00:11:00] because now, more than ever, bigger companies with more budget can very quickly erase you.

Jérémy Cohen: So I think when you say "AI startups are dead," we could first define an AI startup.

If it's just calling an API, is it really an AI startup, or is it a "PDF GPT" startup? And these are different. I actually wrote about this in my daily emails, I think a week or two ago. And the gist of my answer was based on a book by M.J. DeMarco. I don't know if you've read The Millionaire Fastlane.

It's one of the books that got me started in entrepreneurship. Basically, it talks about five pillars to build a startup, or to make it on your own, I would say. You have control and entry: [00:12:00] control means you need to have control over what you're doing.

Entry is about the entry barrier. It should be low enough that you can do it, but high enough that not everybody else can do it. Then you have need, so there must be a need. And then there is scale and time. And the thing is, all these startups had two main problems. The first was control.

They completely delegated their control to OpenAI. So basically, if OpenAI wants to shut down the API tomorrow, all these startups just cease to exist. Like Thanos: it's over. So that would be the first thing, control. If you have something that relies on GPT, but if GPT breaks you have Bard, and if Bard breaks you have another one, and you also have a lighter custom model in case everybody else decides not to sell anymore, maybe that's more robust.

Now, based on that, you also have the idea [00:13:00] of need, and the need here was completely destroyed when they added the feature to their own platform. But when you look at, for example, the iPhone, at some point you had in the App Store many apps for portrait mode.

So stuff that's going to do image segmentation and then blur the background, and that's it. They just blur the background, and that gives you a nice effect, what they call bokeh, I think: portrait mode in the picture. And at some point Apple released a new feature on their phones with portrait mode and then cinematic mode, and all of these companies either had to adapt and change their offer, or they just died.

It became free for the existing customers, and I think it's more related to that. So if you address a need that another company can easily solve, and at the same time your entire product is [00:14:00] dependent on another thing that can kill your control, then you don't really have a sustainable startup.

You have something very short-term that can make some money at the beginning. At the same time, it's actually slow and long to get your first customers and make money, and by the time you have something, it can die overnight. So I would say that's my answer.

Louis-François Bouchard: I think it's just because, especially on Product Hunt and on LinkedIn, I see a lot of new products being released where, as someone in the field, I clearly see that it's literally just prompting GPT-4.

And I just wonder if this is hurting our field, that so many people try to create something super quickly and they all go bankrupt, basically. Can this hurt the economy in the AI field, where [00:15:00] we, as AI researchers or as people in the field, lose credibility? Where people conclude that AI is actually either not really useful in the end, or that you'd rather go directly to OpenAI than trust someone independent?

Jérémy Cohen: It's more like... I think if people build startups so easily and so fast, and they die so easily and so fast, it could happen in any field. It's more that there is a hype around it, and everybody goes there, and a lot of people lose their money and their time, and that's kind of it.

That happened, in a way, in crypto and in Web3, where suddenly everybody had a Web3 startup, and we don't really hear from these people anymore. Some of them could still be working on the [00:16:00] mission, but they realized that it's a ten-year thing.

It's not a one-month thing. It's more like, "Okay, if we want to last in this thing, we're going to have to get customers, build trust, build a great product," and so on. So it doesn't hurt credibility or anything. It's more that there has been a gold rush and people tried to get in. And it happened also with self-driving cars.

When I joined the self-driving car world, we were tens of thousands joining at the same time. And then we had a lot of startups dying, a lot of people no longer working on self-driving cars; they decided it was too hard. And then we have the people that stayed, that just keep building stuff and improving it.

And then at some point it's going to become mainstream. 

Louis-François Bouchard: Yeah, that makes sense. All right. So my next one is somewhat different, and it's about, well, it's not even recent anymore in our field, but the AI pause that was suggested. [00:17:00] And so the question, or debate, is: should AI research focus on progress and improving, or on better control and understanding of the algorithms, or whatever we are building?

Jérémy Cohen: That's a tough question, especially since I don't personally know people who build large language models like that, or who have this big control, because mostly, I think, it's a few companies that run the entire thing, right? Even the other startups get acquired.

So should they focus on progress? It's not progress; it's more like competition. And if they don't do it, somebody else will. So they have to do it. And control is: how can they do it without it causing trouble? [00:18:00] I don't know. What do you think?

Louis-François Bouchard: I think it's two-sided. Basically, I agree with you for the companies.

They don't really have a choice. They need to improve, to go forward, and to always get better results. But at the same time, I think it's the government's role to invest way more in having the best AI researchers and more technical people, to quickly iterate and build better laws, and to force the companies in some way to ensure that their models act a certain way, all of that.

I think it's more the government side that needs to invest more and control more, and that, I think, would make a good balance between the two.

Jérémy Cohen: So why are they not doing it? Because [00:19:00] it seems obvious that governments should get involved and start giving regulations.

Is it because it's super slow to create regulations, or do they just not want to get there because there is something else that we don't suspect at our level?

Louis-François Bouchard: Yeah, I don't have the answer, but I guess it's just always extremely slow; it always has been. And on the other side, I think AI is changing faster than anything we've ever seen, so we see it even more now. And I feel like it's time for a change. It's now more important than ever that governments change and act in a more agile way, act like companies: in the worst case, creating [00:20:00] laws that are too restrictive and then changing them, rather than waiting and analyzing and in the end creating some law.

Jérémy Cohen: Because even just keeping up to date with what's happening is very difficult for them. I think they hire people, but still, it's just understanding how it works, because you need to understand how it works to put regulations on it.

Otherwise it's very complicated, because you get answers and you don't understand them. "Why can't we do that?" "Because it works like that, and that's not the way it has been built." It's an entire issue.

Louis-François Bouchard: Yeah. I think not just the technical people within the government, but also the people making decisions need to understand that.

And I guess that they are not putting in the effort to understand a bit of how it works. Not the technical parts, but at least having a broad understanding of the current algorithms: basically understanding that it generates one word after the [00:21:00] other, or why hallucinations happen, and what it is really doing, so that they have a clear idea of the fears they should have and the ones they should not have. All right. So the next one is: can AI replace experts?

Jérémy Cohen: I would say it depends on the purpose of being an expert yourself and what kind of value you add. For example, I write a lot of blog posts on autonomous driving.

I write about Tesla, about lane detection, about a lot of stuff. Recently I wrote a blog post about Tesla and the end-to-end learning system that they will put in the next FSD software. And if I did that with an AI, I would just ask, "Tell me, ChatGPT, how does Tesla's [00:22:00] FSD 12 work?" And it would just tell me it's end to end, or write an article about that.

And I would get what I call the facts. And if an expert gives you just the facts, you don't need the expert if you have ChatGPT. The thing that experts are supposed to add, which I also try to add in my articles, is that you don't just come to my articles for the facts. You come because you know it's going to start with a story.

It's going to have a lot of metaphors. You're going to have personal experiences with the technology, other material such as videos from startups applying the technology, maybe some analysis of job offers related to end-to-end learning. So it's going to be much more than just an article.

It's going to be an experience. I see it that way; maybe other people don't, but I try to make it that way. And if you call an expert and you [00:23:00] say, "Hey, should I take paracetamol with this medicine?", then maybe at some point you can look it up on Google and just have the answer. But you still need a doctor to do it.

Like today, for example, we still trust doctors to tell us. Even though we have Google, we don't naturally just go to Google when we have a medical question; at least I don't, I don't know about you. I have a two-year-old baby, and basically every time I have a question, I prefer to ask the competent human. Even if the AI were competent, I would not necessarily trust it or take the decision on my own.

So experts in certain fields, like medicine, could still be good. In other fields, maybe we just don't want an expert as an expert. We want an expert as someone who can also give us recommendations, give us examples, assist us, and follow along with the [00:24:00] process when we are implementing something.

All of that.

Louis-François Bouchard: And what do you think if we hyper-personalize the artificial intelligence, or even if it impersonates someone? Would this change anything? I still don't think it's going to work.

Jérémy Cohen: I think we are built to trust humans and to talk with humans. I think it can be a great Wikipedia replacement, or a great way to learn stuff, to validate some answers, to rewrite some things.

Sometimes I try to write stories for my emails, and I don't have a lot of vocabulary in English, right? I'm not a native English speaker, so I cannot describe my room the way an AI would, or a novel would. And so in this sense it can be useful, it can be great. But I don't think we should trust it [00:25:00] or use it to the point where we just have conversations with AI.

As in the movie, you know, where the guy falls in love with an AI. Her, I think, or something like that. I don't think we can get to that, or that we should.

Louis-François Bouchard: And so should people be worried about losing their job to an AI? 

Jérémy Cohen: If their job does not add value, or adds value that can be easily automated, then yes, they should be worried. Because today we build cars through automated factories with robots; that's how cars are built today.

We don't have a million people going somewhere into a factory and manually assembling stuff, and fortunately we don't have that today. And I think we will say the same in the future for [00:26:00] other similar jobs. Like: fortunately, we don't have to make people drive thousands of kilometers every single day just to deliver some stuff; fortunately, we can have a self-driving car doing it.

And that can be the same for many things. And for the others, we want to focus on humans.

Louis-François Bouchard: Would you have, not a tip, but any insights for anyone, regardless of their role, to figure out whether they are replaceable or not?

Jérémy Cohen: Oh, that's a great question. First, we can look at your job: is it something that only you can do, or something that anybody could do?

That's an important question. I have a friend who's a surgeon in heart surgery, and not everybody can do that. And if we have expert robots doing it, it's not going to be [00:27:00] for a lot of cases; maybe they're going to be assistants, and surgeons already use robotic systems for that.

So that's already impressive. But accounting can be easily replaceable, up to the point where you need advice, and then you need to ask someone. For Think Autonomous, the accounting is digital: I just have a piece of software, it's online, it's easy, there is not much to do. I would not trust someone to do it if he had just the old paper way, and I had to send him the invoices by mail or something like that.

So if you are an accountant and you're watching this, go toward the idea of giving advice, because you would not only give advice but give it with experience, with [00:28:00] personal connections, with negotiation, with stories. If you are an engineer and you are writing code, I don't think that's easily replaceable for now.

Even if AI writes code, we did not shift overnight to a world where code is now written by AI. We have humans using AI, and even there it doesn't always work. And if it works, they still validate it with other people, and it has to be coded a specific way. So I would say the jobs that are threatened by AI are the jobs that are either easy to do or very, very repetitive.

Something like dashboards, or summarizing a discussion. We could summarize a discussion with AI, and you would no longer need a scribe doing it, if that even exists in startups. But you get the point: everything that is easy or repetitive.

Everything that requires [00:29:00] you to just do the same thing over and over and over again.

Louis-François Bouchard: And in the worst case, my opinion is that most people are not replaceable; they just need to learn to leverage the new tools, which are now mostly AI-based. As long as you keep up with the new tools and how to make yourself more efficient, or just improve the quality of your work through AI or whatever toolset you have,

I think you will keep your job. It's just like content creators, as you said, pre-internet versus after: you would definitely pick the one that uses the internet, even though they were not using it before; they just adapted and learned a new tool. Now the tools are just AI-based. So I guess staying up to date and just having a bit of interest in what's being done matters.

And [00:30:00] being willing to learn and to try out new tools and new apps is super relevant. You can also take online training, as I think most big companies offer, and just keep learning. I guess that now the best kind of training you can take is about new technologies: trying to understand them, and at least starting to think about what in your current job could be done either automatically or more efficiently through artificial intelligence.

Jérémy Cohen: Yeah. And I would say, I don't know if you like cooking, but I recently noticed that there are still a lot of cookbooks out there, produced every year. There are tons of books about how to cook vegan, how to cook for [00:31:00] this and that.

And even though there are thousands, there are still a hundred more every year. It's not because we lack recipes; there is no lack of recipes. We just want the angle of the people who are writing these books. We just like this chef and this type of thing. And if you ask an AI, "Can you write a recipe in the style of that person?"

And that would just not work. We would still buy the book, and we would prefer that. So it's important to understand, and that's an opinion, that we don't build AI to replace us everywhere. We build it to replace us where it's too repetitive for us to do something, or it's costly, or it's not fun to do.

We saw it with AI image generation before. You can actually see today on my website [00:32:00] some courses where the illustrations were made by a designer, some by AI, and some from the pre-AI world, when I was buying stock pictures and would pay like 20, 30, 50 bucks for just a single picture.

And that was quite ugly. Now that I look at it, I'm saying, "Wow, do I really still have that on my website?" As for all the people selling stock pictures, well, now it's going to be complicated, because if you want a picture of a road, you don't need to pay 50 bucks to get it. You can just generate it.

So I think what AI mostly does is raise the bar for what we expect from a service, from anything. But if you have someone who's taking pictures of roads, they still have their job, [00:33:00] because there are still going to be people interested in their angle, like with the recipes, the cooking stuff.

Louis-François Bouchard: In a similar way, I was interviewed for a local TV program. They basically talked about the recent controversy of Sports Illustrated, the magazine that used AI to create fake people and generated articles. And you mentioned that when it comes to advice,

it's definitely better to have a human. So I guess that when it comes to creativity and opinions, it's also better to have a human. So what do you think of the potential of AI for such creative purposes: journalism, art, entertainment?

Jérémy Cohen: Personally, I use AI a lot. I'm a creative person and I have a very creative job.

I use AI a lot, but I don't use it to write [00:34:00] my content or to try to imitate what I do. I really use it as an assistant. I saw people in the AI space doing blog posts: they would just take a question, ask ChatGPT to write an article, copy, paste, add two pictures, done.

I don't believe in that. I know that some websites got a lot of growth in traffic doing that, and then overnight there was an SEO update, and all these websites were gone. I don't think it should be used for that. If you just need an answer to something, you can go to Google, take the first article, get your answer, and you're done.

Now, if you're writing creative stuff, if you're drawing illustrations or pictures, if you are writing stories, journalism, things like that, AI is just a way to either rewrite what you have [00:35:00] written, make it better, validate it, proofread it, or maybe just get the facts. Like: "Can you write a summary of how convolution works?"

"Give me three bullet points and explain it to a 10-year-old." Yeah, that's great; you still need to tell it that the answer should be designed for a 10-year-old. And sometimes I use it a lot for metaphors. You know, before, I had an entire page in my Notion with tons of metaphors and stories that I would keep, and now I no longer need it.

I just say, "Hey, find me a metaphor in history about how this connects to that, and I want it for an article based on that." And it will automatically find me great metaphors after maybe 10, 15 minutes. That's what I'm looking for when I'm asking AI for something. I don't want it to replace my writing completely.

It's not going to be good. [00:36:00] And even if it gets better and better, people don't like fake. I think we generally don't like fake; we are repulsed by fake stuff. So when we create something, it should feel authentic, and I don't believe AI can really be that authentic.

Louis-François Bouchard: Yeah. I also use it as a kind of personal assistant, just asking it a lot of different stuff, for example for YouTube, for my scripts.

I sometimes ask it to either rephrase something or to give me a very applicable and relatable example. It's a very good editor. Now we're entering the question: is it removing the role of the editor? Maybe. But often it's just like a friend that you can brainstorm with.

And other times it's someone that can [00:37:00] help you basically improve your texts in many ways, and also generate images or external content. It's just a very powerful tool, as Google is, but much more efficient. I think it's just a better internet. You can find anything you want on the internet; it will just take you time to read through all the relevant articles, find the relevant example, and then maybe adapt it a little to your use case. Whereas ChatGPT basically does all that and adapts it exactly to you, automatically. So it's a gain of time.

Yeah. Yeah, definitely. 

Jérémy Cohen: So if you write articles that are just around facts, you are no longer needed. But if you write better articles, then we may be interested in reading you. That would be the thing.

Louis-François Bouchard: Hey, I'm interrupting this episode to remind you to [00:38:00] leave a like or a five-star review, depending on where you are watching or listening.

This helps my work a lot. Thanks to anyone taking a few seconds to click on the like button. Let's get back to the discussion. 

So we've been talking about using AI for a while now, and this is a concern of mine. I saw a study, which I'm sure you know, that looked into people using Google Maps a lot versus people not using it.

And they've seen that using Google Maps hurts our memory, and our brain in general, because we don't try to look for signs that we recognize, or practice which road to take, et cetera. And I wonder if it is the same for leveraging AI for any questions or things that we have to do.

And so my question, [00:39:00] or the debate, is: will AI hurt human capabilities in general, or is it purely net positive, just helping us?

Jérémy Cohen: I think you could be right. I cannot drive home right now. It's probably 10 minutes away, but I don't know how to get home. I rely on Waze too much.

And it's going to happen with AI. Developers will rely too much on it to code. And the minute AI is not able to find a solution, and that happens. I tried many times to get it to debug code and it just cannot. For some problems it cannot; it just doesn't know. It's obvious: it repeats the same stuff.

Because it's been trained on a dataset. And so the minute you are very innovative, or you're trying a problem that is a bit edgy, [00:40:00] it may not know the answer. Then what do you do if you forgot how to code, or your brain hasn't been used for this much?

 That can be a problem. 

Louis-François Bouchard: And this just made me think of the problem of hallucinations as well. Do you think hallucinations can be fixed? Can we scale up the current systems and build something powerful enough that we can trust it to give the right answers?

Jérémy Cohen: I think AI is trained based on a dataset, and it's about next-word prediction as of today. So there are two problems, I would say, with AI and why it's difficult to trust. The most obvious is not necessarily hallucination. That is a problem, but I don't know about you, I see it much less today than, like, six months ago. I think it's vanished a bit.

There are still some [00:41:00] issues. But on the other hand, there is still a problem of bias; to me, that's like, wow. For example, I use Notion, and writing my strategy for 2024, I wanted to just write stuff, get ideas on the table. And by accident, I typed slash plan 2024, and the AI started to generate text.

I hit enter and the AI started to generate my strategy for 2024. And the first thing it wrote was: environment, sustainability, stuff like that, right off the bat. Just like that, you kind of see that there is a problem, because it's been over-trained to answer that the plan for the company is environment and sustainability.

And that's not my problem. My problem is a different problem. [00:42:00] Every company has its own problems. And for that, if I ask an AI to develop my strategies, even with a lot of inputs, I would not trust it. I would definitely not trust it, because I know that it has been trained on the internet and that you can find anything there.

And I don't want just anything for my business strategy; I want what maybe 3 percent of the planet would validate, and that's it. So whether it's that or the hallucination, I think there is a point where you have to know when to trust an AI and when not to trust it, and there is a threshold.

Sometimes people over-trusted it at the beginning, and now it's a bit the opposite, and there will be some balance. Ultimately, you still want to be careful with it. Even if it were perfect, not making any error, [00:43:00] you don't necessarily know exactly how it's going to help or be helpful. If you need advice, whose advice is it? It's the AI's, but it's drawn from a million people.

So who exactly? Is it the advice from someone in particular, or a group, or a forum? And based on the forum, on the way they do their stuff, on their political orientation, it can be totally different and not suited to you.

Louis-François Bouchard: Yeah. And, how do you say, I guess that if we keep the current transformer-based architecture, we are just doomed to having hallucinations because, as you said, it's next-word prediction.

So it doesn't have a good understanding of concepts and things; it just knows what's the best next word to say. For example, right now [00:44:00] I'm trying to speak, and since it's not my native language, it's often hard to come up with the next word.

So I just choose another one, or I just skip it, and the sentence still makes sense even though it lacks a word. It's definitely not working as our brain does. And I feel like we definitely need to change and innovate in the AI space if we really want to reach AGI or something that is more intelligent.

And until then, there will definitely still be hallucinations and very dumb behavior that will keep happening, which I don't think is necessarily bad. In fact, I think it also helps with the question we talked about earlier, about hurting human capabilities. Basically, in my last discussion with a great guy in the field, Kenji, he [00:45:00] mentioned that.

To him, AI hallucinations were more a feature than a bug, and I now kind of agree with this. I really like to see it that way because, on my end, I've been using it a lot for coding, or just for my scripts, or anything really. But I know how it works and that it can just say anything and can hallucinate.

It forces me to make sure that I understand what it tells me, and to double-check on Google when I can, or by asking someone. I feel like this hallucination issue is something that is helping us retain some independent capability to understand and to learn new things, rather than being completely dependent on some superintelligent external thing.
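As a quick aside, the "next-word prediction" both speakers keep returning to can be sketched with a toy bigram model. This is a deliberate oversimplification: real LLMs learn probabilities with a transformer over huge corpora rather than counting word pairs, and the tiny corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy "next-word prediction": count, for each word, which words
# followed it in a (made-up) training text, then predict the most
# frequent continuation. LLMs do the same job with learned
# probabilities instead of raw counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

A model like this can only replay patterns from its training data, which is exactly the limitation Jérémy describes for edgy coding problems: if the continuation was never seen, the model has nothing sensible to predict.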

Jérémy Cohen: And actually, sometimes I ask it questions and I automatically know when it's [00:46:00] wrong. I have a detector. I don't know about you, but I have a detector: "Oh no, it's full of crap here. Obviously it's wrong." But anyway, we are trained now to detect these errors, because we've been exposed to obvious errors from the beginning, and now we don't really trust it.

And so it's better for us, I would say.

Louis-François Bouchard: My next one is related to hallucinations and the data being trained on. It is that, similar to journalists, we all have biases; we all have opinions. And even though I guess most journalists try to be as objective as possible, they still have their own opinions.

I think it's impossible for a human being to be completely objective, even if you try. And so I wonder if we can [00:47:00] remove all human biases from an AI that we trained on human texts, or on human content at large. Do you think it's possible to build a neutral entity, something that doesn't have biases?

Jérémy Cohen: Well, as you said, there is no unbiased article or anything. You see it with the coverage of the war in Israel currently: everybody has an opinion. And even if you just show the facts, there is the simple fact that you choose to show five minutes of one side and three minutes of the other.

So there is a two-minute difference, or you talk more about this than that. All of that is bias. It's humans deciding, and even if it's equally split, even that is a bias, [00:48:00] to represent something as if it were equal. So it's obviously all biased, and so is the entire internet.

And I don't see why people try to look so neutral, because it's obvious that we are not neutral and we are not unbiased. I'm not saying we should all go extreme; I'm not saying that. I'm saying that I don't know if trying to appear neutral, while secretly not being neutral, when there is no other way than not being neutral, is a solution.

We just can't. So we should try and see: okay, assuming I am not neutral, what now? What's happening, and how do I make it so it's still a fair thing? [00:49:00] That's in journalism, but it can also be in the way AI answers stuff. If you ask it about political stuff, you will see that it's a bit biased.

If you ask about genders, it's going to be biased. Basically, for any topic that is a bit political, you're going to see very easily that it's biased. And it's not necessarily a bad thing if you just say: it is biased because 80 percent of the data is like this and just 20 percent is like that.

But if you try and hide it and appear neutral, then there is a mistrust issue. 

Louis-François Bouchard: Yeah. And is it possible to have such an intelligent entity identify whether the things it learns are biased or not, and figure out how not to be biased? I guess everything is in the data and the data distribution, [00:50:00] et cetera. But since it has opinions, yeah, that are definitely skewed towards one side, it still has all the opinions. So maybe we can...

Jérémy Cohen: It can just tell you where it comes from. Like in the new update of GPT-4, what they do is show you: "We are browsing this website." And that is great, we love that, because automatically we know where the answer comes from.

It's perfect; I think everybody is happy with that. We say, "Oh, okay, so this is why I have this answer." At first, I did not think explainability was such a big thing. We were saying, okay, we have black boxes and that's it; why spend so much time trying to explain the result? But I was in a very self-driving-car scenario.

I would say, I don't see a point in explaining why we classify this as a stop sign. It's a stop sign. We have the features, just move on. [00:51:00] But now that we are at this level of AI, it's no longer the same thing. We need explainability. We need at least a source: where does this sentence come from?

Louis-François Bouchard: My master's thesis was in explainability in computer vision. It was just an interest of mine to understand how it could classify things, but I also didn't see any real use case for why it understood that a cat was a cat. It's not really relevant.

It was just interesting to me as a researcher, but LLMs definitely change everything. It's super easy to see that we want to understand why it says that and why not. For example, if we talk with a friend or our family and we ask them for advice or something, we may question: but why, why not do that instead?

And they will give [00:52:00] experience, or examples, or where they learned it from. And if we trust the source, we will trust them. So it's just the same for LLMs sharing

Jérémy Cohen: Knowledge. It's all about trust. We want to trust it, and to trust it, we would need transparency.

Louis-François Bouchard: That's exactly related to the interview I had yesterday. They asked me why this was such a big issue, why it went viral that a big magazine used AI to generate articles and fake people. And that's exactly what I said: I think the main issue is that they lacked transparency. They didn't say anything; they just tried to fake it.

And it wasn't spotted, and the articles were good. I don't know if they were viral, but they were [00:53:00] being read, and people enjoyed them. I think there's no issue with AI having generated a good article. If the article is good, it's good. It's irrelevant who wrote it, or what wrote it.

But I think what's relevant is saying: this was automatically generated, or this was made by this person. They usually credit the author, so why not say that it was automatically generated? The main issue is with transparency and trust, as you say. I completely agree with that.

Jérémy Cohen: Because people don't want to lose the credit for their work. Let's say I make an illustration for my course with AI. I spent two hours doing it. I did not just write "make an illustration." I had to review hundreds of different ideas, I had to change it, I even had to post-produce the algorithm's [00:54:00] output.

It's more like AI is helping, so we should maybe write "done with the help of AI," some little icon or something, without it being a hundred percent AI-generated. Maybe we should have that distinction: rather than just AI versus human, have AI, human plus AI, and human.

This way we would better understand, and people would be more inclined to say "human plus AI," and everybody would be in that category. So there would be no shame in doing that, if we do that, maybe.

Louis-François Bouchard: Yeah. I had a discussion with the VP of Editorial at Hacker Noon, which is a platform to share articles.

They were going to add specific tags for [00:55:00] AI-generated or AI-edited articles, but not forbid the use of AI; just be super transparent that it was either generated or edited with artificial intelligence. And I think that's just perfect. If someone doesn't want to have any AI thing in their feed, they will just unclick that tag and have only human-written articles. As long as everything is transparent, I think it's completely fine to use any help. When it comes to writing, I see using AI just as using Grammarly, or

a friend that has good writing skills to review your work. It's just external help that lets you see your own writing differently and that can suggest things to add. For example, I recently shared a [00:56:00] video on why I quit the PhD. I sent my initial script to ChatGPT and basically asked what it thought, whether I should add anything relevant, and how to make it better. I put my questions as a list to try to get more specific feedback, and one piece of feedback was that

it seemed to have too many things against the PhD and maybe not enough for the PhD, sharing basically what I enjoyed while doing it, because I really did enjoy it, and I still do. I just think that it's not for me anymore and that I want to do more real-world stuff. But I didn't share it

clearly enough in the initial script, and without this external AI help, I would have shared maybe a very negative video about the PhD, [00:57:00] whereas that was not my goal at all. I still think the PhD is super relevant to many people, just not for me. So anyway, it was amazing feedback to improve the quality and value of the video.

Jérémy Cohen: But I would say it is biasing you to go neutral. For example, two days ago, I was launching my course on Bird's Eye View, which is, when you have self-driving cars, you look at the image and you have vehicles and all that stuff, and then you just want to have a bird's-eye view from the top.

So you can have an understanding of the context and all that. We use that a lot in self-driving cars. And at some point, I wanted to write an email, the "last chance" email. I have a series of emails; I send maybe five or ten of these in a short period of time, where I'm going to send stories and stuff about bird's-eye view, and people love reading them. And then the last chance email.

So [00:58:00] what some people do is just say, "Hey, last chance, buy my stuff." But I wanted to write a story about a plane taking off, and people not missing the plane. I wanted to have a story between some kind of general and an assistant, and have the general be super angry that the assistant wanted to leave now and not wait until midnight.

That was the moment the course would close. I don't know if you follow, but the idea was basically that I wanted to do some kind of mean stuff where the general was super angry and would say, "We are going to wait for every single one of them. They have five minutes to join the course," blah, blah, blah. And ChatGPT told me, "Oh no, you should not say that."

"You should try and make him happy about the idea." And I was like, no, no, I just wanted that; it was funny to write. I asked [00:59:00] it three or four times to rewrite it so that I could have that. You know, I like when it rewrites a paragraph, because of the details and all that stuff.

And in the end it could not. So I just wrote my own thing and published it. I was against the AI: I wanted it to be mean, and it did not want me to be mean. So that can happen too.

Louis-François Bouchard: Yeah, I see how it definitely makes things more neutral or friendly. I guess it did change my article. But basically, my whole thing is that I wrote a video on my PhD this summer, and I never published it because a few weeks after writing the script, I was starting to think of quitting.

So I never published the video about the PhD and what I liked about it. It's just [01:00:00] to say that I did have things to say for the PhD, and it just reminded me that it might be important to highlight some of it. It's not a half-and-half video; it's still definitely more on the plus side of the startup world and all the things I do. But still, it gave me really good cues on why it would be relevant to add

this thing or that thing. I felt like it's also good to cover both sides, even if you are more for one side. But I can see that you definitely need to be careful, especially if you want some humor or a more opinionated thing. You definitely want to maybe not take into account all it says, because it's definitely politically correct and biased towards that.

Jérémy Cohen: Yeah. I write a lot of emails, so [01:01:00] this morning, I wrote email number 1,100. Imagine: in 1,100 daily emails, I often give my opinion. I often say stuff that is politically incorrect, and I want that, because it's honest and transparent.

And that's me. If I ask the AI to just verify every single one of my articles or my emails, they will always be very neutral, and so very boring in a way. I'm convinced that if you did a video one hundred percent against the PhD, that would probably, in my opinion, be more interesting, and then maybe you can do one for the PhD. Because when you're saying what you mean, and you're not held back by something,

what you write is often better. Sometimes when I do videos or [01:02:00] write emails, I have two modes. I have the fury mode, when I just say anything that comes to my mind. And then I have the okay mode, the thing that is okay, that is going to pass everywhere. And the fury-written email is always better. It's like, wow, we want to read that. That's interesting, that's engaging. We want to answer, we want to give an opinion. And the other is like: okay, I have the good, I have the bad, I have the summary, I have the explanation, three bullet points. That's good, that's informative. So it depends on whether you want the information, the facts, or really the human side of it.

Louis-François Bouchard: Yeah. I have two last questions to ask. In my case, it's a bit different for the first one, because my father really likes what I do and artificial intelligence. He's interested in learning about and understanding AI, and he's now even using [01:03:00] ChatGPT instead of Google. But other people, like my mother or most of my friends, do not use ChatGPT. I assume it's the easiest tool to use, but they still don't use it.

So what do you think it will take for the general population, such as my mother or friends, to use artificial intelligence?

Jérémy Cohen: I have the same parents as you. My father loves ChatGPT. The other day he told me, "We have a dial, there is a dial now," and I was like... so, it was in French.

He just wrote the word "dial," and I was like, "What's a dial?" He said, "Yeah, it's incredible." And that was actually DALL·E, the image generator. He was so excited about it; he was using it, changing his website, trying stuff like that. And my mother, I think she doesn't even realize what it is.

But if we want my mother or your mother to use it, first we would need a good reason for them to do that. The reason you and I use it is because it's solving a problem for us: that we need to go through 50 websites just to get an answer to something. My mother does not really have that problem.

She likes going through some websites, and that's okay with her. She doesn't have a content creation job or anything like that, so she's fine with it. She is using AI when she's watching Netflix and using the recommendation system, even if she doesn't know it; this is what it is. Now, maybe at some point she will ask AI for some answers, but it would need to be like what Google is trying to do, where there is a sidebar with the AI answer: very embedded, almost invisible, like we don't know it's an AI. Then it's mainstream. If you cannot tell that

this [01:05:00] show is being offered to you because of an AI, and even on Netflix the illustration is different based on who is looking at it, then that's mainstream.

Louis-François Bouchard: Yeah, that was my exact opinion: it needs to be extremely well embedded in the platforms that they already use. And I feel like even though ChatGPT is super accessible,

it's just a chatbot, you can write anything and it will answer, it's still another application, another thing to subscribe to, even if it's free, another place to go to look for things. Whereas with something like Siri on the iPhone, if it becomes more intelligent and uses AI, and it just becomes better and better,

people will in fact be using AI just by using their iPhone. Just like autocorrect, I guess, [01:06:00] if they make it more and more AI-based. Invisible, yeah. It's funny because we want it to be transparent, but it needs to be invisible to the user.

Jérémy Cohen: Yeah. And I think there is also this idea of the hype cycle. Maybe with ChatGPT we've been the early adopters, and our fathers are the middle that joins right now. My wife is now using ChatGPT to ask questions, and three, four months ago, she did not really know what it was.

And maybe then there will be the late adopters. It's just going to be too obvious not to use it, too accessible: the first solution offered versus searching the web, automatically present on every website, like the chatbots are right now; when you join a website, the support is now chatbot-based.

You can have that. And [01:07:00] that will make the late adopters join. 

Louis-François Bouchard: It's a random thing, but on my end, I managed to convert my girlfriend to using Midjourney for generating images. But that was before DALL·E 3. And I need to get on your level and teach her to use ChatGPT, or when to use it.

Jérémy Cohen: Yeah.

And Midjourney is actually also good. The only thing is that, at least for now, you cannot really chat with it; you cannot write to it as you would like to. I don't know if you've tried ChatGPT with DALL·E, but it's really amazing: you just say "change that to that" and it's going to work. In the other, you need to write some weird keywords. I use the keyword "double exposure" a lot; it's a keyword for when you have two images at once. So if you say "a cup of tea, [01:08:00] a garden, double exposure," you're going to have, well, maybe not with that example, but say an eye that is also a portal.

And then you have the double exposure: you have the eye, and you also have a portal to something else. I'm using that a lot, and I don't even like to use that word; it's complicated for me to think of it there. It's the only word I know, so I use it everywhere, and there are millions of words I should learn on Midjourney. Every update makes it more difficult to use and to think of, and there's just too much. You can become a pro at that, but at some point they're going to catch up to GPT, and so that's going to be useless.

Louis-François Bouchard: Yeah. I really like what OpenAI has done with DALL·E 3, embedding it in ChatGPT.

And now it reformulates all your prompts based on what you asked, but also on the conversation before, and it's just [01:09:00] amazing. But yeah, so my last question. I personally think that, contrary to what we thought before, when AI was said to increase the gap between poor and rich by allowing rich people to better control everyone else.

And now, from what I see, of course it helps companies make more money, but it also democratizes a lot of stuff for anyone. I see it at a personal level: it allows me to do much more than I could before and to learn way more things, and the same goes for a lot of other people who did not have access before, just because it's free.

You can use it to build stuff and to do things you never could before. So my question here is: do you think that AI is helping democratize more things, more industries and tasks, [01:10:00] or is it more benefiting the big companies and the richer people, or both?

Jérémy Cohen: I think it's funny you talk about that because I was looking at something the other day.

It was about rich versus poor versus accessibility of stuff. It said that basically in 1953 in the US, about 1 percent of people were very wealthy, 4 percent were financially secure, and 15 percent were completely able to retire. So around 20 percent were doing incredibly well.

And then 80 percent were struggling, a bit broke, almost broke; they considered themselves very middle class, humble, all of that. That was 1953. In 2012, they ran [01:11:00] the poll again, and it got exactly the same answers. Imagine everything that changed in 50 or 60 years; it's all much easier now.

For me to run my job in the 1950s, setting aside that self-driving cars did not even exist, just teaching innovative stuff to engineers, that would be crazy. And now I have all this potential, all this AI, all this stuff. And we still have the same ratio of rich versus poor versus middle class.

We still have everything the same. So, just looking at the data, I don't think that more access to more information and more access to more tools closes that gap. Unfortunately, we still have like 20 percent [01:12:00] being extremely wealthy and the others struggling.

And probably we will have that with AI as well. I don't see a reason why this would change, because with everything that we had in the past, it did not change.

Louis-François Bouchard: The internet definitely allows us to create some kinds of products more easily than physical ones. And likewise for artificial intelligence: for example, if you can code right now using English rather than learning JavaScript or whatever, that's also making it more accessible.

But would you say that ultimately it's still, I guess, the distribution of people that are more entrepreneurial versus others that follow school and the more traditional path? Is it because, [01:13:00] even if it's more accessible and easier to build something, they just aren't willing to do it?

And it's just based on like the human 

Jérémy Cohen: Brain. No, it's more like AI is not here to make you rich. It's not here for that. You don't get rich by using tools or by having more access to more information. You get rich by mindset. I think for most people who are poor, or who define themselves as poor, it's a matter of mindset.

It's the simple idea that you use the word "poor" versus, I would say, "broke." One is temporary and one is eternal. "Poor" is like: I'm poor, that's it. If you say "I'm broke," it's: I'm temporarily struggling and I will get back up. It's all mindset and it's all education. [01:14:00] And so that's probably what we would need to work on.

Education about money, about how to see yourself in the world, about how to see yourself learning stuff. That is, I think, more important. And this is why, in my emails, I'm writing a lot of what people call career emails, things like:

How to negotiate, how to build your resume, how to do all that. All the mind stuff. Not really implementation, but more like: you should see yourself as valuable. All that stuff. And people are like, hey, I don't want that, get off with this. You know, I just want the how-to or the technical stuff. Give me that.

Like the occupancy networks and the 3D stuff, just give me that, don't give me the career stuff. [01:15:00] And they realize later that the people who studied the career emails and implemented them get much better results than the people just scrambling for every possible article on a technical topic, because it's the mindset that produces the result.

It's the idea of: I understand how to position myself versus another person, I understand how to say no and when to say no. All that stuff is incredibly valuable. And we've seen a lot of that with personal development; I think it helped a lot of people. But I would say mindset is much more efficient, and a bigger reason people become rich, than just knowing how to use another tool.

Louis-François Bouchard: Yeah, I couldn't agree more. I guess everything is in how confident you are. There are many studies showing that confidence is sexy, basically. [01:16:00] And I think just by being confident, you tackle more and you try things you think you couldn't do, even if you personally think you cannot, that you will fail.

If you are confident enough, you will still try, and worst case, you fail. I think that's where the power of mindset, as you said, lies: it allows you to push more and do more. Whereas if you lack confidence, you will just try less, and you will give up before it succeeds.

For example, in my case, I wouldn't say it's a big success or whatever, but my YouTube channel works, and I think it took at least eight months or more to get to my first thousand subscribers. Before that, [01:17:00] I was below a hundred views per video.

And I did one or two videos per week for eight months before it gave me anything. And that's mostly because I was fully anonymous: I didn't use my real voice, I used text-to-speech instead, and it was extremely low quality. So it was very bad, and I didn't want anyone to know that I was doing it.

So I didn't share it, and that's why it didn't have any reach. But still, I consistently published and I tried, and I don't know why, but I knew that someday it would work, even though the quality, looking back, is horrible.

Jérémy Cohen: I saw people doing AI-generated courses, and I still have the same opinion.

Like, why would you want that? I'm not sure. I mean, maybe for explanations where you just have drawings [01:18:00] and arrows, but even there, it's better to add the human face and the smile and all of that.

Louis-François Bouchard: Yeah. When it comes to learning, I also preach learning with multiple modalities.

So basically learning through videos, live events, writing your own thoughts, or reading. All the different ways you can learn are very beneficial. But just reading an article that is AI-generated, it's missing the fun facts, and that's what you remember from high school or whatever.

The only things I remember are the fun facts or the cool things the teacher said. So yeah, that's definitely useful. Awesome. So, is there anything you would like to share with the audience?

Jérémy Cohen: I think we covered a lot about what AI is, and more than what AI is: how do you deal with an AI world?

If you are [01:19:00] an engineer, or anyone who either feels threatened by AI or just doesn't understand how you can use AI, it's like what we discussed. Maybe it would be worth reading the transcript, since we discussed multimodalities. I think when you read stuff, it's much better, especially interviews.

When I read interviews, it's amazing, and when I just listen to them, it's also good, but I think reading adds a lot. And try to think of all these concepts of how you can really be valuable in this society. Like, am I just giving the facts, or am I giving my entire self? And of course, if you want to learn more about self-driving cars, Louis will give you the link to join my emails and read them.

They are free, and you can join on thinkautonomous.ai.

Louis-François Bouchard: Yeah, I definitely recommend it, and the link will be the [01:20:00] first one in the description. So, thank you very much for joining me. It was a very fun and nice discussion, and I'm sure it's helpful in many ways for various people.

So yeah, thank you. Thanks a lot for taking this whole hour and a half to chat with me.