Welcome, (upcoming) listeners! I’m thrilled to bring you this new episode. Today, my guest is none other than Paige Bailey, a trailblazer in AI product management and a visionary who’s been at the helm of transformative projects at Google DeepMind and GitHub, working on the most advanced LLM projects.
In our discussion, Paige shares her insights on the current state and the potential of generative AI. We touch upon Google’s PaLM 2 and its implications for the future of machine learning. Paige offers a glimpse into the philosophy of democratizing technology, ensuring it serves as a tool for empowerment and inclusivity.
But this conversation isn’t just about the abstract future of AI; it’s about its tangible current impacts. How does AI-assisted coding alter the landscape of software engineering? What does the integration of AI in education mean for our learning systems? Paige addresses these questions with clarity and a forward-thinking perspective that is sure to spark your curiosity and interest.
I found my conversation with Paige to be both humbling and illuminating. Her experience with AI’s capabilities, limitations, and the ethical considerations that come with it offers a comprehensive look at the responsibilities of those shaping our digital future.
So, if you’re intrigued by how AI is redefining productivity, curious about the collaborative synergy between humans and machines, or simply eager to hear a candid discussion about the safety and accessibility of AI, this is the podcast for you.
Tune in to join us on this exploratory journey into the heart of AI innovation. Let’s demystify and democratize AI together!
Full video transcript
Paige Bailey: [00:00:00] There’s a great quote about how the most transformative technologies unlock experiences that were previously only possible for the ultra rich. So Uber gives you the experience of having a chauffeur, right? Instacart is like having a personal shopper. Previously, those were things that were possible only for the ultra wealthy, and now they’re possible for everyone.
Paige Bailey: No matter how many accelerators you have, if you’re training a massive foundational model at Google scale or at OpenAI scale, it’s going to take you months. For pre-training, the process of building the model in the first place, it’s obviously incredibly important to get your data added. If you miss the bus for getting your data added to the pre-training mixture, you’re going to have to wait months

Paige Bailey: to get your next shot, or you would have to wait until the initial model completes and do continued pre-training.
Louis Bouchard: This is an interview with Paige Bailey. Paige is [00:01:00] currently the lead product manager at Google DeepMind, working on PaLM 2, Gemini, and a lot of other products. She was also the principal product manager at GitHub when they were building Copilot. We had a very interesting discussion on generative AI, with lots of insight on how to get started building your own product or getting into the field.
Louis Bouchard: I hope you enjoy this interview. And if you do, please don’t forget to subscribe or leave a five star review if you are listening on a streaming platform.
Paige Bailey: I’m so glad to be here and to get to talk to you today. My name is Paige. I’m a product lead at Google DeepMind. Previously I was the product lead for PaLM 2,

Paige Bailey: the model that was announced recently at Google I/O and then got dispersed into lots of different products around Alphabet. At last count, I think it was something like 38 products and features. I currently work on a couple of things at GDM. One is our code AI efforts, machine learning applied to software development: everything from code generation and code completions to [00:02:00] performance optimization, storage optimization, bug fixes, build repairs, all these interesting things. And then also helping out with Gemini, particularly helping with getting code data, getting math data, and then also making sure that we have the right ethos.
Louis Bouchard: You mentioned code generation, but before that you also worked with GitHub on Copilot. That was, I assume, pretty much one of the first, like, very good code completion systems.
Paige Bailey: Absolutely. During the pandemic, I spent just over a year away at Microsoft working on a lot of things.

Paige Bailey: So helping with GPUs and Codespaces, getting to work with the VS Code team on their machine learning features, and then also getting to help with Copilot. I was able to do some user experience testing, helped craft some of the behavioral patterns for it, and then also was able to [00:03:00] give feedback pretty early on to the team on developer experience.

Paige Bailey: And I still definitely use Copilot. I think machine learning features within the context of IDEs are getting to be an expectation from users: not just code completion and not just code generation, but also lots more smarts getting baked directly into these places. So Copilot was revolutionary in that respect, and it’s really inspired a lot of other tools around the ecosystem.
Louis Bouchard: What’s your opinion on using, and I’ll take the example of ChatGPT just because that’s what most people use, using ChatGPT as a copilot versus using something built directly into the IDE? On my end, I really like just going to ChatGPT and asking, can you write the function that does this or that?

Louis Bouchard: And then I can just use it. But I must be honest, I have tried [00:04:00] Copilot, but I haven’t really done a deep dive into it. I don’t know why, but I mainly prefer to have an external chat where I can prototype and then try things in my code.

Louis Bouchard: So, a chat assistant that you can use for coding versus the direct copilot in the IDE: what’s your opinion on that?
Paige Bailey: Yeah, I think it’s a great question, and we’ve observed something similar when working with Bard. Google has gone about introducing machine learning features into several IDEs throughout our dev tools ecosystem, both internally and externally.

Paige Bailey: You might have seen that Colab recently introduced code completions, code generation, and also a code Q&A feature, similar to Android Studio with StudioBot, the Duet AI features that are baked into GCP, and then also many of our [00:05:00] products that support Googlers internally. And I also agree that having more of a chat-like experience feels more like having a thought partner, as opposed to having something automatically generate code within the context of the IDE.
Paige Bailey: I use both, but if I’m trying to brainstorm something or just getting started, then a chat experience is a little bit more welcoming. It feels a little bit like rubber duck debugging, if you’ve heard of that concept: having somebody to bounce ideas off of, even if it’s just kind of sitting there, is super helpful.
Paige Bailey: But I definitely do both. And I think we see, for the most part, that when people ask questions in a ChatGPT-like or Bard-like environment, they’re just kind of getting started, brainstorming. Whereas if you’re within the context of an IDE, you’re typing something out, you know more or less what you want to do, [00:06:00] and you get into this state of flow.

Paige Bailey: And it’s a little bit easier to just have the completions.
Louis Bouchard: From what I understand, you’re saying that most people would go on ChatGPT to start a new venture, to start learning; you basically get the pip installs and everything you need to get started, which is much more valuable than a copilot for getting started.

Louis Bouchard: But then if you have experience, a copilot might be more efficient or more useful.
Paige Bailey: That feels right. I definitely think that for just getting started or finding out what you should be doing, having more of a conversational experience, like Bard or ChatGPT or whatever it might [00:07:00] be, makes a lot of sense.
Paige Bailey: So as an example, I have zero expertise creating Android apps. I have done it a couple of times, but for the most part, I would have no clue how to get started building Android apps. I know that Android Studio might be involved, but I don’t know what libraries, what files.

Paige Bailey: I know Kotlin might be a thing that I would need to care about, or Java. So I would probably go on Bard and ask something to the effect of: hey, I’m thinking about creating a mobile app to do this thing. What would I even need to know in order to get started? And hopefully Bard would be able to say something to the effect of: oh, great to hear that you’re building a mobile app; here are

Paige Bailey: the things that you might need to care about, and you should probably be doing this in the context of an IDE like Android Studio. In which case I would be kind of off to the races. Similarly, front-end development: I have no expertise in that. I can write back-end code; I can’t write [00:08:00] front-end code.

Paige Bailey: So having somebody nudge me in the right direction, tell me what frameworks I need to care about, what libraries I would need to care about, or what languages, is super, super helpful.
Louis Bouchard: Same for me: I understand what I need to do, I just don’t know the functions and how exactly to write them. And so, to me, it’s easy to leverage ChatGPT, but I feel like it may be way harder for someone who is completely new to the field and doesn’t know anything about coding. My main concern is that because I know what the code is supposed to look like overall, and what it should do,

Louis Bouchard: I can detect when something seems like a hallucination or is wrong. I understand what to edit or quickly change, and so I’m super efficient with it. But if you are completely new and [00:09:00] don’t know anything about programming, or about the specific task you are trying to use ChatGPT to help you with, it might not be helpful at all, or it could even cause you some problems.

Louis Bouchard: If, for example, you deploy a website and you don’t make it secure enough, or you just blindly follow whatever the language model tells you. So my question is: how are you trying to help with that? With these hallucinations, or the things that beginners cannot really catch when they just follow the language model’s orders?

Louis Bouchard: I know that, for example, we’ve built RAG-based approaches where you have a memory, and so you try to reduce hallucination risk using external memories and knowledge that you control. But how can you do that? How are you trying to do that within the language models that you create?
Paige Bailey: It’s a great question.
Paige Bailey: I think many people are experimenting with such things [00:10:00] in the product space around language models. So, within the context of an IDE, you could imagine a feature baked into something like Copilot, or something like a VS Code extension, where, as you’re writing code, perhaps it has a second pass to take a look and see if there are any security vulnerabilities in it, or any data structures that could be re-implemented to be a little bit more efficient. These things aren’t necessarily baked into the first output from the model itself, but would require recursively calling the model to ask how to improve, based on particular patterns that the model might be able to see, and then kind of self-repair. So my expectation is that we’ll be seeing more of those.

Paige Bailey: I also think that there are really excellent tools around using execution feedback, or [00:11:00] interpreter feedback, or compiler feedback, so that if the model generates some code, you can see if it’s syntactically valid, if it’s using the most up-to-date APIs, and also these things like performance and security considerations.
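The feedback loop Paige describes can be sketched in a few lines of Python. This is a minimal illustration, not any product’s actual pipeline: `model` stands in for an arbitrary LLM call, `toy_model` is a made-up stub for demonstration, and the only check performed here is syntactic validity via `ast.parse` (a real system would also run tests, linters, and security scans).

```python
import ast

def syntax_feedback(code):
    """Return a compiler-style error message, or None if the code parses."""
    try:
        ast.parse(code)
        return None
    except SyntaxError as err:
        return "SyntaxError on line {}: {}".format(err.lineno, err.msg)

def repair_loop(prompt, model, max_rounds=3):
    """Generate code, feed any syntax error back to the model, and retry."""
    code = model(prompt)
    for _ in range(max_rounds):
        error = syntax_feedback(code)
        if error is None:
            return code
        # Recursively call the model with the concrete error attached.
        code = model(prompt + "\nYour previous attempt failed: " + error + "\nPlease fix it.")
    return code

def toy_model(prompt):
    """Stand-in for a real LLM: returns broken code until it sees an error message."""
    if "failed" in prompt:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b) return a + b"
```

The same structure works with interpreter or compiler feedback: only the checker function changes, while the generate-check-repair loop stays the same.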
Paige Bailey: And then, of course, over time the model gets better and better because you implement these feedback loops. But for the most part, this is still something that everybody should be very, very aware of. And one thing that keeps me up at night a little bit is, you know, there are so many folks these days who are

Paige Bailey: taking Physics 101 or Calc 101 courses. With my nieces and nephews, the first thing is: all right, I’m just going to copy and paste this into ChatGPT, or I’m going to copy and paste this into Bard, and then get the answers back. But that doesn’t really teach you. The pedagogy isn’t there.

Paige Bailey: It doesn’t really teach you, step by step: how might I approach this problem? How would I solve it? How would I think [00:12:00] empirically? And so I feel like what we need to be focused on is helping people understand concepts, so that they learn the fundamentals of being able to program, or of mathematics, physics, chemistry, whatever it might be.

Paige Bailey: In which case the generative models become kind of like an accelerant, something that helps them be more efficient, but not just the tool with the answer that people accept blindly.
Louis Bouchard: Yeah, what I see could happen is similar to Khanmigo, I think it’s called, and I think CS50 is working on a language model as well.

Louis Bouchard: But basically, if you cannot control the students using large language models, you might as well just give them one that is good but fine-tuned in a way that it doesn’t give the answer; it just leads you through understanding and finally [00:13:00] getting to the answer yourself. I wonder, and I don’t know if you can talk about this, but are you working on some other specific use cases of large language models like this one? Or are you trying to build a foundational model that

Louis Bouchard: other people can build upon?
Paige Bailey: So I am not part of our education teams, and these topics would fall squarely into the charters of the education teams. But I will say that there is this concept of having kind of a baby model that’s been fine-tuned in such a way that, in order for it to learn generalized concepts, the user it interacts with has to teach it.

Paige Bailey: So you could imagine a model that perhaps isn’t very good yet at code, which [00:14:00] partners and learns with a user who is in the process of learning computer science concepts. Maybe the model gets better over time because it sees more and more examples of correct computer code, and it understands the conventions that the user is interested in following.

Paige Bailey: But that’s not something that I work on personally. I think it’s a fascinating concept, and if CS50 does this, then all teachers will rejoice, because everybody is trying to figure out how to design their curriculum knowing that generative AI is going to be a tool that students use. Having it as something in their toolkit, but not something they rely on overmuch, is going to be incredibly important.
Louis Bouchard: Yeah. And it’s even more important now that, since COVID, everything is more remote; you have many more online quizzes and a lot of things that you can [00:15:00] basically do with ChatGPT. So there’s definitely a lot to work on for current universities and educational platforms. But I wanted to go back to hallucinations, because to me it’s something very interesting and very important for language models.

Louis Bouchard: And I just wonder, since you are immersed in this field, if you have a hypothesis or an opinion on why these hallucinations happen. Why does the model have to give an answer instead of just saying, I don’t know, as we humans can definitely know that we don’t know?
Paige Bailey: Well, again, not necessarily baked into the model itself, but baked into the product built around these large language models, you can implement techniques.

Paige Bailey: So you mentioned retrieval earlier: being able to do some sort of retrieval [00:16:00] can ground the model’s output in a way that is more correct and less prone to hallucinations. Say you ask Bard a question like, how many types of cheeses are made in Palo Alto?

Paige Bailey: Or, was there just an earthquake in San Francisco? The model is able to go look through the context of a search engine, get back the top ten results, and then use those as part of its prompt to generate a better response to the user. So it’s more likely to be grounded in something factual and less likely to hallucinate.
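That retrieve-then-prompt pattern is easy to sketch. The snippet below is a toy illustration only: the keyword-overlap `search` stands in for a real search engine, and the prompt template is invented for the example rather than taken from any real product.

```python
def search(query, documents, k=3):
    """Toy keyword-overlap retriever, standing in for a real search engine."""
    terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda doc: len(terms & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def grounded_prompt(query, documents):
    """Stuff the top retrieved snippets into the prompt so the answer stays anchored to them."""
    snippets = "\n".join("- " + doc for doc in search(query, documents))
    return ("Answer using ONLY the sources below. "
            "If they do not cover the question, say you don't know.\n"
            "Sources:\n" + snippets + "\n\nQuestion: " + query)
```

The model then answers from the retrieved snippets instead of from memory alone, which is what makes the output easier to keep factual.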
Paige Bailey: But one thing that I’ve been very, very interested in is this concept of models being able to spot when they have low confidence in answering a question, and asking clarifying or follow-up questions to get supplemental information from the user. So, you know, I could ask, [00:17:00] was there just an earthquake?

Paige Bailey: And maybe the model says: oh, well, where are you located? Or it could look up my geolocation, if it has access to such data, and then use that to ground its answer instead. But again, this isn’t embedded logic within the model itself. Today, it’s stitched together as a user experience built into the product around the model.
Louis Bouchard: And do you think this needs to be outside of the model? Like, why can’t we make the model, not more confident, but aware of when it doesn’t know? Why is [00:18:00] hallucination a problem in the first place? That’s basically my whole question: why is it built into the model, such that we’re required to build things around it to help? Why can’t we fix it from within?
Paige Bailey: Well, that’s the state of the world today, right? And it might not be the case in the future. But you could also consider doing things like constitutional AI approaches. Anthropic has really interesting papers on this topic. It’s kind of coaching the model:

Paige Bailey: hey, you are to answer if you have high confidence, and if not, then you should seek supplemental information or say that you don’t know.
Louis Bouchard: Yeah. So basically diverting away from only supervised or self-supervised training and fine-tuning, and trying to find other ways to further train the model to better understand its own answers.
Paige Bailey: So, constitutional AI, have you heard of [00:19:00] that before?
Louis Bouchard: Yeah, it’s basically RLAIF, I assume. So it’s reinforcement learning with AI feedback rather than human feedback, basically.
Paige Bailey: Well, it’s also just kind of carefully prompting the model such that you give it very clear directions on how to respond and what not to respond to.

Paige Bailey: And so I think these are very, very interesting techniques, again outside the realm of my experience. But today, many of the approaches towards shaping model outputs are sort of baked into the product around the model.
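As a rough illustration of the “coach the model with explicit rules” idea, here is a draft-then-critique loop in Python. The rule text and the two-pass structure are invented for this sketch; `model` is any LLM call, and real constitutional AI training goes much further than prompting alone.

```python
CONSTITUTION = (
    "Rules for the assistant:\n"
    "1. Answer only when confident; otherwise say exactly: I don't know.\n"
    "2. Ask a clarifying question when the request is ambiguous.\n"
)

def constitutional_call(model, user_message):
    """Draft-then-critique: the model reviews its own draft against the rules."""
    draft = model(CONSTITUTION + "\nUser: " + user_message + "\nAssistant:")
    review_prompt = (CONSTITUTION + "\nDraft answer: " + draft +
                     "\nRewrite the draft so it follows every rule, "
                     "or return it unchanged if it already complies.")
    return model(review_prompt)
```

The point is that the rules live in the product layer: you can change them without retraining anything, which matches the “built around the model” framing in the conversation.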
Louis Bouchard: And, I’m changing the topic a bit, but is there something to do with model size? Or, to make my question simpler: how important is size for a language model? At Google, are you trying to make it as big as possible? Or is there potential in smaller models with a better product around them? What’s your opinion on, for example, smaller 7B models versus [00:20:00] way bigger models?
Paige Bailey: That’s a great question. For PaLM 2, as an example, we experimented with many different model sizes: super small, small enough to fit on a mobile device, as well as much larger models and everything in between. I think we made public an XS, an S, an M, and an XL; many different model sizes.

Paige Bailey: And for each one of these, you see significant benefits and also drawbacks, right? Larger models are obviously much harder to serve. You have to think about things like distributed inference. They take much longer to train, you need much more data to train them, and then there are also things like power consumption.

Paige Bailey: So these models can be much more powerful, but they’re also much harder to distribute to all of the users that you might have globally. But smaller [00:21:00] models are really capable of punching above their weight, especially if you do domain-specific fine-tuning for those smaller models.
Paige Bailey: I’ve been really excited to see the models that have been released recently from Facebook and also from Mistral, and of course the great folks at Hugging Face, who are building on top of the foundational models that have been shared. And you can get really impressive

Paige Bailey: performance if you do these customizations and fine-tuning for models. And certainly when we deploy large language models everywhere around Google, in Workspace and Search and Bard and Duet AI, all of these various places, we can’t really optimize for having the largest model serving each and every user.

Paige Bailey: There just aren’t enough GPUs in the entire world to do that. So to get much more efficient, we have to experiment with smaller models, customized for domain-specific tasks and then deployed [00:22:00] in such a way that they’re much more efficient: only using a single accelerator, or just a few accelerators, as opposed to this much larger approach with distributed inference.
Louis Bouchard: If you are using smaller models like this, do you need better products around them? For example, if we use Bing Chat or ChatGPT, it’s just a chat, so you can ask it anything and it will basically not answer if it’s a dangerous question; it’s self-contained, I guess.

Louis Bouchard: But if you use a much smaller model that you can fine-tune and make much better for your specific application, I guess there are some risks in deploying it. I guess it’s more susceptible to prompt hacking and other kinds of injection, prompt injection and things like that.

Louis Bouchard: And so what is [00:23:00] required to make the small models competent enough to deploy them and have them online, in terms of the products around them?
Paige Bailey: So, if I could unpack your question a little bit: you’re noting that there’s a lot of work that goes into building an experience like Bard or ChatGPT or Perplexity.

Paige Bailey: What would be required to have a similar sort of experience, with all of the safety checks, the responsible AI features, the efficient inference, if you were just to grab an open-source model off the shelf? Is that right?
Louis Bouchard: Yeah, exactly. You said it much better than I did.
Paige Bailey: No, no, no, no, no. It’s a great question.
Paige Bailey: And I think, you know, there is absolutely a massive amount of work that goes into building an experience like ChatGPT or Bard or any of these other features. There’s any domain-specific fine-tuning that you might do for the model. So in the context of Copilot,

Paige Bailey: you might [00:24:00] train it or do continued pre-training on additional source code. You might do things like fill-in-the-middle techniques. You might experiment with RL directly within the context of the IDE. And then for safety features, you might decide: all right, we’re not going to generate text-only content,

Paige Bailey: we’re only going to generate code. Or we might guide the model with a series of heuristics or constitutional AI approaches to not get involved with certain kinds of concepts or certain kinds of questions. And this is constantly evolving, right? I think everybody remembers when ChatGPT first came out, almost a year ago.

Paige Bailey: Not quite a year ago, but almost. It felt like there was some sort of security OMG that was screenshotted and put on social media every single day. But over time, the team was able to identify all of those and get them [00:25:00] added either to a list of heuristics or to just: hey, model, don’t talk about this.

Paige Bailey: And that’s been a process that evolved over time. The retrieval techniques, the ability to use search engines: that wasn’t there on day zero, but it’s all been added and has greatly enhanced the user experience. Things like plugins, things like customized prompts, being able to have personal profiles. Bard just recently gave users the ability to pull in their Drive files, or their Gmail, or Docs, or Sheets. All of these things are really what make the UX for Bard or for ChatGPT feel more like magic. And all of that matters because, increasingly, it feels like models are getting commoditized. So the open-source model itself is not necessarily as important as how well you know your users, and whether you can build these very nuanced, differentiated experiences around what they’re [00:26:00] trying to do.
Paige Bailey: So it’s a lot of work, is kind of the TL;DR, and that doesn’t even touch on the serving aspect. If you do get a model: how do you efficiently serve it? How do you serve it at scale? How do you make sure it’s served reliably? How do you build some sort of logic so that if a user asks a really tough, maybe technical question, you punt it towards a model with more parameters,

Paige Bailey: on the order of a few dozen billion. And if the user asks, hey, what’s up, how do I make tacos? then maybe you can point it towards the 7B version. So I think all of this product work is often underestimated. But it’s why companies, you know, still need engineers, still need people to care about user experience.

Paige Bailey: Because that’s what gets people excited to use the thing, and what builds the data flywheels that make it continuously better over time.
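The routing logic Paige sketches, tough questions to the big model, chit-chat to the 7B tier, can be written as a tiny dispatcher. Everything here is a toy: `classify` is a crude keyword heuristic standing in for a learned router, and the two model arguments are placeholders for real serving endpoints.

```python
def classify(query):
    """Crude difficulty heuristic, standing in for a learned routing classifier."""
    hard_markers = ("prove", "optimize", "debug", "derive", "refactor")
    return "hard" if any(marker in query.lower() for marker in hard_markers) else "easy"

def route(query, small_model, large_model):
    """Punt tough questions to the big model; keep casual ones on the cheap 7B tier."""
    chosen = large_model if classify(query) == "hard" else small_model
    return chosen(query)
```

The design win is cost: the expensive, high-parameter model only burns accelerator time on the queries that actually need it.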
Louis Bouchard: And I assume optimizing those is also, [00:27:00] I guess, your main role as a product manager?
Paige Bailey: So, I am currently a product manager within a research team, which is a little bit different:

Paige Bailey: a product manager for a model. And this is a great question, by the way. Product managers for models look a little bit different. As an example of the sort of thing I would put on my task list: Alphabet has an awful lot of PAs, lots of product areas, embedded within the company.

Paige Bailey: An example of a product area might be Android, or Ads, or the Assistant team. Being able to interact with all of them, understand the many different use cases that they have, and then prioritize those use cases, such that we can add the right pre-training data and add evals for each one of these cases, and understand which use cases are most important from a revenue [00:28:00] perspective, or maybe a user experience perspective, is really, really important.

Paige Bailey: So say, as an example, maybe one PA at Alphabet cares very deeply about natural language to SQL generation, or about being able to migrate code from one SQL dialect to another. I would need to make sure that we have training data to support those use cases, and evals such that we know how the model is performing today.

Paige Bailey: And then also incorporating similar kinds of examples into our instruction tuning dataset, such that when a user asks a question, the model knows how to respond. So all of these activities, to understand the kinds of things that people might want to do with models, are baked into the research PM role.

Paige Bailey: For PMs operating in the context of a PA, say you’re a PM within Workspace who’s working on large language models [00:29:00] integrated into Sheets, you might be thinking about: how do I put in effective telemetry such that I’m capturing the right user signals?

Paige Bailey: How do I think about helpful nudges, or specialized prompts, such that users will get better experiences when asking the model questions in their Sheets environment? Those sorts of things.
Louis Bouchard: I’m just quickly interrupting this episode to remind you that if you are enjoying it, please don’t forget to subscribe or leave a five-star review if you are listening on a streaming platform.

Louis Bouchard: I also have a newsletter, also called What’s AI, which is linked below if you want to stay up to date with my projects and a lot of AI news, clearly explained there.
Louis Bouchard: Well, since you are in the research environment, with this completely new generative AI subfield: how can you be a product manager and manage [00:30:00] the deadlines, and what you have to achieve and do, when everything is so new? There’s constant innovation, and I assume you also must innovate. So your role must be extremely difficult, or you need to be super creative. What does it look like?
Paige Bailey: Oh, I feel very blessed in the sense that I get to work with so many incredibly smart engineers, research scientists, other product humans every single day.
Paige Bailey: So, so it really is a team effort and it does feel like a treadmill in the sense that. You know, the, the model training process, so training a large foundational model on the order of, you know, months. So, so it will take, no matter how many accelerators you have, if you’re training a massive foundational model at Google scale or at OpenAI scale, it’s going to take months.
Paige Bailey: The fine tuning pieces, so doing, doing [00:31:00] instruction tuning modifications, that might happen every couple of weeks. So every couple of weeks, you have a new eval bundle. You have a new instruction tuning mixture. Those happen in a pretty frequent cadence. so For pre training you know, the process of building the model in the first place, obviously this is incredibly important to get data added.
Paige Bailey: If you missed the bus for getting your data added to the pre training mixture, you’re going to have to wait months in order to get your next shot. Or you would have to wait until the initial model completes and do continued pre training. For instruction tuning sets or for adding evals, you know, the timelines are a little bit more flexible.
Paige Bailey: Like, if you don't hit the 1.x release, you can at least get into the next one in a couple of weeks. But it's honestly felt like running on a treadmill since December. It's been very exhilarating, and I feel like I've learned a ton, but it's also something [00:32:00] that's a little bit exhausting at times.
Paige Bailey: And what's been especially exciting to me is that we certainly have innovation in the research space, but that innovation, from OpenAI, from DeepMind, from Anthropic, from all of the teams who have been building out foundational models, has inspired businesses to start embedding research teams within them.
Paige Bailey: So Assistant has a modeling team: people who care about building instruction tuning features, and learning how to deploy large language models and maintain them effectively within their business unit. And that's awesome, right? That's one of my favorite things. Scott Guthrie, when he first joined Microsoft,
Paige Bailey: said that he joined the Internet team, because the Internet had just come out, everybody was very excited about it and knew that it would be [00:33:00] transformative, and so they created a business unit for it. And then after a little bit, everybody kind of realized: wow, the Internet is not a business unit.
Paige Bailey: It's just a given that all of us are going to have to learn. So I think we're at this point now where, previously, machine learning felt very slow to integrate into products, and now every team has realized: oh, generative AI is here, and we need to care about it just as much as we care about having mobile support or Internet for our company.
Louis Bouchard: And do you think people still need to learn how these models work, how to train and test them? For example, for startups or new companies, the people that don't have unlimited budget: will they just hire one dev or a few devs and use APIs and try to build products around them? [00:34:00] Or is it still pertinent for them to have a research team, someone who can train, test, and understand those models better?
Paige Bailey: I think it might be an evolutionary process, right? You could imagine, if a company is just getting started with generative AI, of course it makes sense for them to start experimenting with APIs and building an intuition of how the models work.
Paige Bailey: Trying out things, seeing what works, seeing what doesn't, maybe building a collection of evals to understand how the models are working for their use cases. And then over time, based on these gen AI experiences that they've baked in as features, suddenly their user count goes up, they're getting more requests, and their bills for OpenAI or Azure or Google or whatever it is
Paige Bailey: start climbing. At which point they realize that generative AI is now core to their business, and perhaps they should be thinking [00:35:00] about proprietary models, about incorporating open-source models, training them to be even better for the use cases they care about, and building an in-house data flywheel, as opposed to sending requests out and getting responses back.
Paige Bailey: Or they might be able to incorporate a model embedded on devices or embedded as an extension. And that's okay. We're still early enough, still so new to all of these things being deployed at scale, that people are going to be experimenting regularly with the APIs, right?
Paige Bailey: Should I care enough to have an in-house team? And what are the trade-offs or benefits of each one of these choices?
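The inflection point Paige describes, where API bills climb until in-house models start to make sense, can be sketched with back-of-envelope arithmetic. Every number below is an illustrative assumption, not real pricing from any provider:

```python
# Hypothetical break-even sketch: metered API vs. fixed-cost self-hosting.
# Both constants are made-up placeholders, not real quotes.

API_COST_PER_1K_TOKENS = 0.002      # assumed blended $/1K tokens for a hosted API
SELF_HOST_FIXED_MONTHLY = 5_000.0   # assumed GPU rental + ops cost per month

def api_monthly_cost(tokens_per_month: float) -> float:
    """Metered cost: scales linearly with usage."""
    return tokens_per_month / 1_000 * API_COST_PER_1K_TOKENS

def crossover_tokens() -> float:
    """Monthly token volume at which the fixed self-hosting cost breaks even."""
    return SELF_HOST_FIXED_MONTHLY / API_COST_PER_1K_TOKENS * 1_000

# At low volume the API is far cheaper than the fixed monthly cost...
print(api_monthly_cost(1_000_000))
# ...but past the crossover volume, self-hosting wins on cost alone.
print(crossover_tokens())
```

Cost is only one axis of the decision, of course; latency, data privacy, and the in-house data flywheel Paige mentions all push on the same trade-off.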
Louis Bouchard: I've seen a lot of recent papers use GPT-4 to build datasets to fine-tune smaller models. Do you think this is a good path, or is it risky, in that it could bias the model?
Paige Bailey: Well, it's certainly [00:36:00] against the terms of service if startups want to make commercial use of that model. At least, if I'm remembering correctly, one of the bits of guidance in the OpenAI terms of service is that you can't generate data to be used for fine-tuning and then commercialize the resulting model, which makes sense.
Paige Bailey: But we're still seeing that high-quality human demonstrations really move the needle for model performance. So I would encourage folks to experiment with that if they haven't yet. Even having on the order of 500 examples, sometimes even fewer,
Paige Bailey: can really help the model hone in and focus on the things that you need and want it to do for your customers.
Louis Bouchard: And 500 examples, would that be for a [00:37:00] smaller model or a bigger one, or does it work for any of them?
Paige Bailey: For any, honestly. You can see, as an example, for some of the Llama models, even the smallest ones: when you do continued pre-training with code, or when you include lots and lots of code examples, suddenly they get much, much better at all of the code-related benchmarks. So one of my hopes is that all of this large language model, generative model work will help people care much, much more about high-quality datasets going forward. Right now there's an obsession with compute, and I agree that compute is important for the larger foundational models, but more and more, the quality of the datasets that you use for pre-training and instruction tuning is becoming mission critical.
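As a concrete illustration of the small curated demonstration sets Paige describes: fine-tuning data is commonly shipped as JSONL, one prompt/response pair per line. The field names and examples below are hypothetical, not any provider's required schema:

```python
import json

# A couple of hand-written demonstrations; a real set would have ~500.
demonstrations = [
    {"prompt": "Summarize: The meeting moved to Tuesday.",
     "response": "Meeting rescheduled to Tuesday."},
    {"prompt": "Summarize: Budget approval is still pending review.",
     "response": "Budget approval pending."},
]

def write_jsonl(examples, path):
    """Serialize one JSON object per line -- a common fine-tuning format."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")

write_jsonl(demonstrations, "finetune_demos.jsonl")

# Sanity-check that every line round-trips and carries both fields.
with open("finetune_demos.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))
```

The point Paige makes is that the hard part is curating the pairs themselves, not the plumbing; the file format is trivially simple.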
Louis Bouchard: I want to make it a bit more concrete and applied. Say someone wants to build a more specific application based on a chatbot, or just question [00:38:00] answering, but specific to one domain or one subfield, so fine-tuning would be relevant. If we say it's one person, or a small startup, how would you suggest they get started
Louis Bouchard: and end up with a product at the end? What would be your ideal steps? For example, trying with an API and then fine-tuning, and which model, Mistral or Llama 2? What would be your best suggestions for someone to get started and build an app like this?
Paige Bailey: So I am very biased, in the sense that the Mistral team are formerly from Meta, but also from DeepMind. Super talented folks.
Paige Bailey: Also, the Hugging Face team is amazing. There was a Colab notebook released just recently showing how to do fine-tuning on Colab [00:39:00] using the Mistral 7B model. (Yeah, I've seen it.) Yep, and that is a great way to learn and to just build intuition. Actually, I'm going to send you the link for one example that I've seen.
Paige Bailey: But just building an intuition, even if it's just using a single GPU: this is what the model looks like pre-fine-tuning; this is what it looks like whenever I take the time to curate, say, a thousand examples; and this is how I would go about testing and evaluating it based on these responses.
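The before-and-after comparison Paige sketches can be framed as a tiny evaluation harness. The two model functions below are hypothetical stubs standing in for real pre- and post-fine-tuning checkpoints; only the scoring logic is the point:

```python
# Minimal eval harness: score a "model" on curated prompt/expected pairs.
# base_model and tuned_model are illustrative stubs, not real checkpoints.

eval_set = [
    ("capital of France?", "paris"),
    ("2 + 2 = ?", "4"),
    ("opposite of hot?", "cold"),
]

def base_model(prompt: str) -> str:
    """Stub for the pre-fine-tuning model: gets most answers wrong."""
    return {"capital of France?": "paris", "2 + 2 = ?": "5",
            "opposite of hot?": "warm"}[prompt]

def tuned_model(prompt: str) -> str:
    """Stub for the fine-tuned model: answers everything correctly."""
    return {"capital of France?": "paris", "2 + 2 = ?": "4",
            "opposite of hot?": "cold"}[prompt]

def accuracy(model, examples) -> float:
    """Fraction of examples where the model's answer matches exactly."""
    hits = sum(model(p).strip().lower() == want for p, want in examples)
    return hits / len(examples)

print(accuracy(base_model, eval_set))   # low score before fine-tuning
print(accuracy(tuned_model, eval_set))  # higher score after fine-tuning
```

In a real workflow you would swap the stubs for calls to the two checkpoints, keep the eval set fixed, and watch the score move as you curate more examples.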
Louis Bouchard: Yeah, it definitely makes sense. And that's actually how I'm doing it: I'm also working on a project where the current live version uses an API with GPT-3.5 and prompts it, but we're experimenting right now with smaller models, and we're in [00:40:00] the process of fine-tuning them, trying to have something that is basically much cheaper but with similar performance. Which is definitely possible to do, especially if, as you said, you combine it with great pieces around it, like RAG and other things for the UX. Exactly. Yeah.
Paige Bailey: And I saw just recently that a very, very good friend from Hugging Face posted about Zephyr beta, if you've heard of it. That's a similar approach: being able to get performance superior to GPT-3.5 using a much, much smaller model, with a lot of care and attention put into the fine-tuning data. I think this is super exciting, right? It's [00:41:00] efficient.
Paige Bailey: It also unlocks potential for folks who might be outside the realm of these larger companies building the foundational models, but who do want to invest the time in understanding their users' problems and creating these really well-defined datasets. And places like Hugging Face's dataset marketplace, I think, are going to get even more important over time, as people realize those are treasure troves for this fine-tuning capability.
Louis Bouchard: Yeah, I find this very fascinating. Back when I started getting into the field and learning more about AI, there were no large language models or bigger models that worked online. And I remember my professors, or just people in the media, talking about it, saying that big companies would use AI to control people [00:42:00] even more, and that it would just worsen the gap between rich and poor, things like that.
Louis Bouchard: Whereas the more I see the advances and the news, the more it just seems to democratize everything. Not only is AI more accessible, but it also makes, for example, writing more accessible; writing in English, whereas I'm French, is a bit harder for me, as are some other things, and ChatGPT or other models really help you.
Louis Bouchard: It really democratizes a lot of fields and just allows anybody to do lots of things we couldn't do before. It's just so funny to see that it's completely the opposite of what people thought before it actually happened.
Paige Bailey: I agree. And that's one of the biggest hopes that I have for generative models as well.
Paige Bailey: I think many of [00:43:00] us are enchanted with this concept of The Diamond Age: having a primer that's kind of like a personalized teacher, a personalized tutor, specifically for you and the things that you're learning and experiencing each day. There is a great quote recently about how the most transformative technologies unlock experiences that were previously only possible for the ultra rich.
Paige Bailey: So Uber gives you this perspective of having a chauffeur, right? Instacart is like having a personal shopper. And it's the same with the sort of services that automatically choose fashion for you; it's like having a personal stylist. And previously those are things that were just possible for the ultra wealthy, and now they're possible for everyone.
Paige Bailey: I feel we're at this point where having personalized tutors, somebody who can coach you on speaking a new language, somebody who can prep you [00:44:00] for an exam or prep you for college, or being able to have an editor for your book, if you want to write something...
Paige Bailey: These are all experiences that previously would have cost someone hundreds of dollars to have. And now you could feasibly have them in your pocket, and that's really magical, if we can view it as a teaching tool as opposed to just an answer engine. I think that's a really beautiful way to think about the possibilities of generative models.
Louis Bouchard: What do you think will happen with company roles, or just the work we do in general? For example, even for me right now, I'm working on my startup, and I just feel like with generative models I can pretty much do anything. I'm doing the front end, and I'm doing stuff that I just couldn't even imagine doing before.
Louis Bouchard: And it's [00:45:00] not easy, but I just have the mindset that I can achieve anything, just because I have access to this. Whereas of course we could Google before, and go on Stack Overflow and try to find a similar question with a similar answer and try to adapt it, but it was much more work.
Louis Bouchard: And sometimes you just get discouraged and give up, but I feel like these new models, this new generative AI space, allows you to almost do anything, or to learn anything at least. Yep. And so, I know this is a very far-fetched question, but what do you think will happen with the formation of people, university, and getting into a new role?
Louis Bouchard: Do you still need credentials? Do you still really need to know how to do something in order to do it? [00:46:00] Do you think there are a lot of things that will change?
Paige Bailey: Yeah, it's a great question, and I don't think anybody has the answers yet. But I tend to be an optimist, or at least I try to be. So, one thing I'm hopeful about: universities today are kind of these ivory-tower experiences, right? Some people get into the Ivy League schools, others don't. Some people get into the IITs in India, and some people don't. And then that ends up being some sort of credential, a pedigree, that dictates whether or not you might be able to have other experiences long term.
Paige Bailey: I hate that. I think that's very gatekeepy, and it also discounts the creativity, the experience, and the capability of people who are just as smart and just as driven, maybe even more driven or smarter, but who just didn't go to the other [00:47:00] university.
Paige Bailey: So hopefully, one thing that I would love to have is generative models democratizing this concept of education as well. University becomes less important, perhaps not important at all; maybe it's just something focused more on research. And as you're learning things, what becomes more important is: what have you built?
Paige Bailey: What are you excited about? What have you created? That's the resume that matters, not what university you went to. So that is one of my hopes: being able to democratize the kinds of opportunities that people have, and to encourage people to create, to publish, to talk about what they're working on, as opposed to just focusing on passing a test, getting into a school, going through the treadmill of taking courses, and then emerging with a GPA
Paige Bailey: before trying to enter the workforce. [00:48:00] I also think this is a huge opportunity to take away the more tedious parts of software development. The thing that's exciting is having an idea and being able to create it. The things that are less exciting are maintaining it over time, upgrading it, dealing with scaling, or dealing with performance issues.
Paige Bailey: So maybe this means that all of us can focus time on the things that we enjoy most, as opposed to having to think about the challenges of maintaining software systems over time, or having an automated SRE that would be able to respond instead of you having to pull pager duty. I think these are all opportunities.
Paige Bailey: And if what generative AI gives us is the ability to spend more time on what we love, and to be able to ask more interesting questions, then I think that would be a really beautiful outcome.
Louis Bouchard: Yeah, exactly. I completely agree with what you said [00:49:00] about school and degrees, et cetera.
Louis Bouchard: In fact, I just recently stopped my PhD to focus on my startup and this podcast and YouTube and everything around that. And I was just curious, before talking about the other thing I agreed with you on: what are your thoughts currently on the PhD and the master's degree, or just university degrees?
Louis Bouchard: I know you just said that ideally it's not even necessary, but do you think it is necessary right now? Does it provide value you cannot get elsewhere? Should anyone do graduate studies, or is this not really relevant anymore?
Paige Bailey: So I can give Paige's opinion, which is perhaps different from other folks' opinions.
Paige Bailey: But my opinion about this is: if you drop out of a PhD program, you're in pretty great company, right? Larry and [00:50:00] Sergey dropped out of PhD programs. Elon Musk, lots of hyper-successful humans, went into grad school and then decided, for whatever reason, that there was something more interesting, more exciting, or more opportune to devote their time to.
Paige Bailey: And I'm personally of the same opinion. If you feel called to build something, you're probably going to learn more by building this thing that you're really excited about, even if it means delaying your PhD program or not completing it. Right. There are some folks who, I think, are still trying to learn how to ask interesting questions and to be methodical or empirical about how to answer them.
Paige Bailey: In which case, going to school might be the right choice, because you're around lots of people who have spent their entire lives learning how to ask interesting questions and write thoughtfully about how to solve them. But I don't think that having a PhD is a prerequisite for doing interesting work, by [00:51:00] any stretch of the imagination. Even at the generative AI labs, like OpenAI and DeepMind and Anthropic, PhDs aren't prerequisites.
Paige Bailey: Even attending college is not a prerequisite. It's much more important what you build.
Louis Bouchard: So what would you suggest to someone who is not in programming, who doesn't even know how to program yet? For example, I have a lot of friends who did mechanical engineering or completely different fields, and who, just by starting to work in the industry, figured out they didn't really like what they are doing.
Louis Bouchard: And I assume a lot of people watching or listening to this right now are in the same position. What would be your recommendation to get into the generative AI field, to the point of being able to develop [00:52:00] those new applications?
Paige Bailey: Yeah. So I started programming when I was really young, but only things like text adventure games.
Paige Bailey: I was always kind of in love with it. I never considered it to be something that I could ultimately have as a job, because it just felt so much fun to do. But my background in university was geophysics: geophysics, applied math, planetary science work, that sort of vein.
Paige Bailey: Computer science classes were not required in the slightest. I think there was one course called computational applied math, which used MATLAB, but that was the only computer science course mandated as part of the degree program. So it is certainly possible to enter the domain with degrees outside the realm of computer science, and most of the folks working even at the largest labs today have degrees in physics or mathematics or chemistry [00:53:00] or sometimes even philosophy, which is pretty nifty, honestly.
Paige Bailey: But again, you as a domain expert in whatever field you have chosen to pursue academically, whether it's mechanical engineering, sociology, history, or whatever it might be: you are unique in that you have a lot of questions that could potentially be answered with generative AI techniques.
Paige Bailey: And so the first thing that I would recommend is: test out using the playgrounds, like the OpenAI Playground, or MakerSuite if you're using Google products. Ask a lot of questions. Build intuitions about what the models get right and what they get wrong. And most of these UIs, whether from Hugging Face or the two that I just mentioned, also give you the ability to export code to do what you just did within the UI.
Paige Bailey: So you can export Python code and put it into Colab and start running it there. [00:54:00] But increasingly, as we were just discussing, code is going to become less and less important. What's more important are the kinds of questions you can ask and the ideas you have for what to build.
Paige Bailey: And anybody can ask interesting questions. You're actually even better positioned to answer questions in a domain if you've learned a whole bunch about it. So there's nothing setting you back if you're not coming from computer science; if anything, it's like having a superpower, if you know things about chemistry and also learn things about generative AI and how it could potentially be applied.
Louis Bouchard: And do you think it is still relevant to have some kind of theoretical background, not specifically from universities or school, but a theoretical understanding of large language models, like transformers, or just the technology behind them? Or is this not relevant at all?
Paige Bailey: I think it's useful, but it's not all-encompassing.
Paige Bailey: I would say it's useful to understand that these models were trained on a [00:55:00] large amount of code and text and image and video data. It's useful to understand that when they're generating content, there are ways in which it could go wrong: that it's not deterministic, that these outputs are things you should check or have considerations for.
Paige Bailey: It's useful to understand concepts like how user feedback flywheels can impact model performance, or how you can stitch together retrieval techniques around the model. But it's not: hello, here's a whiteboard, please mathematically describe how you might implement a transformer model in Python, or explain mathematically how each one of these phases passes data back and forth.
Paige Bailey: I think that's a little bit, perhaps, too deep for most people to need to understand. If you're [00:56:00] building models, the answer might be different. But if you're fine-tuning existing models, I think you're okay with just having these high-level conceptual understandings.
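The retrieval techniques Paige mentions can be illustrated without any real model at all: embed the documents, embed the query, and hand the nearest passage to the model as context. The three-dimensional "embeddings" below are hand-made placeholders for what a real embedding model would produce:

```python
import math

# Toy corpus with hand-made 3-d "embeddings" (a real system would compute
# these with an embedding model; the vectors here are illustrative).
corpus = {
    "The capital of France is Paris.":        [0.9, 0.1, 0.0],
    "Photosynthesis occurs in chloroplasts.": [0.0, 0.9, 0.2],
    "Paris hosted the 1900 Olympics.":        [0.8, 0.0, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, k=1):
    """Return the k corpus passages closest to the query embedding."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]),
                    reverse=True)
    return ranked[:k]

# Pretend this is the embedding of "What is the capital of France?"
query = [1.0, 0.05, 0.1]
# The retrieved passage would then be pasted into the model's prompt.
print(retrieve(query)[0])
```

This is the high-level concept worth having: the model never "searches" anything; the surrounding system picks relevant text and puts it in front of the model.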
Louis Bouchard: Yeah, I definitely agree with you.
Louis Bouchard: So I have just a very few last questions to ask you, again related to my audience, which is mostly beginners or students in the field. I have two questions on this topic. I've seen that you are a strong advocate for good communication, between teams and with people in general, and for documenting code.
Louis Bouchard: And I just wonder: what do you do in your own team to ensure good communication? Do you have to do anything, or is it something everyone has and you just try to make it easier? What do you do to improve communication?
Paige Bailey: So communication is [00:57:00] certainly something that's a bit of a challenge to instill within teams and to encourage. But definitely things like making sure that as much as possible is documented.
Paige Bailey: There has been kind of an influx of chat rooms recently, I think, as folks work more and more remotely, and we're trying to figure out how to encourage people to put more of what they do into writing, into a chat experience. There was something that I really, really loved
Paige Bailey: that one of my earlier teams did, which was async standups. Given that everybody was globally distributed, instead of having 15 minutes every day where somebody has to dial in at 11 PM and somebody else is dialing in at 6 AM, they would just put in Slack or in chat: hey, here are three things that are top of mind for me today,
Paige Bailey: three things that I did yesterday, and also something like a fun question. So it might be: what's your favorite movie? And then you just have a GIF of your favorite movie. Encouraging everyone on the team to do that gives visibility into everyone's work, and it also helps people feel a little bit more connected, in that you learn something fun about everybody, hopefully.
Paige Bailey: Another thing is encouraging team-building activities. At GitHub, there was a team that I worked on with lostintangent, who was one of the folks leading the team, where each Friday we would have kind of a rotating playlist.
Paige Bailey: Every person would add a song to a playlist that we would listen to each Friday, and everybody could listen to it or not, whatever they cared about. But it was a great way to again feel connected to folks, to understand how they were feeling. Because if somebody added a really fast electronic track, then you were like: wow, they must be feeling very busy this week.
Paige Bailey: Versus if somebody added [00:59:00] something a little bit more chill and relaxing. So it was just cool. These team-building exercises are important, especially for remote teams, because if you feel like you can trust someone, then you're more likely to ask a question, more likely to share what you're working on.
Paige Bailey: But if you don't have these little touch points, then I think a lot of that gets missed.
Louis Bouchard: Would you have any recommendation for someone before getting hired? If you are currently studying, or learning to do some work, and then you want to find a job, is there something you can do to practice your communication skills, to be sure that you will be nice to work with and easier for your manager to work with?
Paige Bailey: Yeah, that's a great question. I certainly think writing down what you're doing, either as blog posts or just short blurbs on social media, is [01:00:00] really useful, because it can help people feel like they know you a little bit better, and they know what you're working on.
Paige Bailey: I find also that communication on things like Buganizer issues or GitHub issues is really beautiful to see, because then you have this front-and-center understanding of how people interact with each other in a workplace. Like, if you file a pull request and somebody asks you to make some changes, being able to explain the logic of how you implemented it initially, or just saying: oh, wow, I hadn't considered that,
Paige Bailey: I will go ahead and make the changes — that's really important. And then also, we have these generative AI tools; use them to help you reframe how you speak about the work that you do. So maybe you're in ChatGPT or in Bard, and you say: hey, I'm trying to think about an elevator pitch.
Paige Bailey: If I see Marc Andreessen, or whoever it might [01:01:00] be, in an elevator, how do I convince him to invest in my startup in less than a minute? Here's what I'm thinking I would say. Then maybe the model could give you back a response like: you know, that's great, but it's a little bit too wordy.
Paige Bailey: Maybe condense it down a little bit more and reframe it in this way.
Louis Bouchard: You mentioned that chat can be a problem, especially live chat, because if you work with someone and you talk a lot in meetings, et cetera, it's not really documented, compared to clear documentation. Are you trying to push others to use generative AI to summarize meetings, to take notes and document things for them?
Louis Bouchard: Or is that not something you are doing right now?
Paige Bailey: So that is a feature that I would love to have turned on for every meeting that I attend. It has not been deployed internally within Google, but I know that a lot of startups externally are thinking about these very nifty ways to do meeting transcriptions and then [01:02:00] consolidate all of the notes and the action items and send them out to attendees.
Paige Bailey: And I think that's great. I'm also a fan of having fewer meetings, because I think that would be even better. And maybe, to your earlier point, there's a generative model that gets deployed that is capable of understanding: oh, wow, these two people are writing code that looks super similar;
Paige Bailey: maybe I should encourage the two of them to go talk about it. As opposed to people having to know in their brains: wow, I should go talk to these 20 people to understand what they're each working on, and then get a response back from one and set up some time. I think, especially for larger organizations, a little autonomous bot that just goes through and says, wow, it looks like these two people are working on the same thing,
Paige Bailey: would be really helpful. Or even for GitHub projects, honestly: if two libraries look like they're tackling the same thing, then maybe it would be useful to encourage the two authors to talk to each other [01:03:00] before going off and diverging down multiple paths.
Louis Bouchard: Yeah, I guess that’s definitely something coming soon.
Louis Bouchard: It seems super useful and a very pertinent use case. I just have one last question, still for the people getting into the field.
Louis Bouchard: Other than communication, which is an extremely important skill, especially right now, when communicating with generative AI is also super relevant: which skill do you think is worth practicing, or is the best to work on right now? If you want to be in MLOps, or be someone who is working on and deploying those models, what skill should they be working on or trying to improve as much as possible?
Paige Bailey: I still feel like the ability to ask really great questions differentiates the people who are capable of being successful and resilient through [01:04:00] all of these technological changes from the people who feel a little bit left behind, because learning how to be curious and ask great questions is something that you can cultivate.
Paige Bailey: It’s just a matter of, Uh, you know, kind of looking around, seeing what’s interesting, seeing what maybe isn’t present in the data and digging in deeper to understand how you might be able to close those gaps or address those questions. Exploration games are really, really great for this, I think because the only way that you can advance through the exploration games is by turning over every rock and like, you know, asking every, asking every NPC, you know, questions about gameplay.
Louis Bouchard: So they should be gaming to find their next job.
Paige Bailey: Well, I don’t know about that, but I do feel like exploration games have helped me build this habit of being curious and asking better questions.
Louis Bouchard: Yeah. It’s just like me when I was younger: I [01:05:00] played a French video game a lot.
Louis Bouchard: There was basically a marketplace where you could make money by selling and trading, and I just feel like it taught me a lot compared to some other friends who didn’t play this game. It taught me to better manage my money, to budget, and to save: to know that if I save now, I can have more money later on, which are very basic concepts.
Louis Bouchard: But I guess as a kid, or as a young person, a lot of them are not really aware of that; they spend without really thinking, and I feel like this video game really helped me. So there’s definitely something in video games that can make you better at life in general.
Paige Bailey: Yeah, I agree. I felt the same about the simulation games. Things like SimPark and SimSafari and The Sims were really great [01:06:00] at helping me understand how systems work, and also, you know, that if I introduce a whole bunch of rabbits, then ultimately all of the vegetation will go away.
Paige Bailey: So you should probably bring in some natural predators, those sorts of things.
Louis Bouchard: Yeah, it’s really cool. Awesome. Well, thank you very much for coming. I’d just like to know if you have anything you’d like to share, or where people can find you. Anything you want to say to the audience, you are free to go.
Paige Bailey: Awesome. So, super grateful to be on your show. This has been a very fun way to spend a Saturday morning. And the thing that I would recommend folks go check out is all of the wonderful machine learning features that have been recently added to Colab. If people are using Colab, and hopefully they are, it’s really, really nice to have this tooling baked [01:07:00] directly into the IDE.
Paige Bailey: And then, of course, I am dynamicwebpaige everywhere on the internet. Always feel free to reach out. And yeah, that’s about it.
Louis Bouchard: Awesome. Thank you very much for your time once again.
Paige Bailey: Thank you.