The Human Layer

Beyond AI's Blackbox: Building Technology That Serves Humanity

cstreet Season 1 Episode 5

Ever wonder if there's a better way to build AI? One that doesn't rely on scraping the internet without consent or encouraging dependency on opaque systems? Beth Rudden, CEO of Bast AI, offers a refreshing perspective that challenges everything we've come to accept about artificial intelligence.

At the heart of Beth's approach lies a fundamental truth: understanding is a labor, not an act. Unlike mainstream AI systems that statistically brute-force syntax without context, Bast AI grounds information in ontologies and graph models, providing the necessary scaffolding for genuine comprehension. The result? Deterministic systems that never hallucinate or go off-script, while still leveraging generative capabilities for appropriate applications.

Beth draws a crucial distinction between what AI and humans do best - machines excel at sensing patterns across vast datasets, while humans determine which patterns actually matter. This insight leads to a compelling vision for the future: local libraries housing community-specific language models, with librarians serving as ethical stewards of information. Such systems could revitalize news deserts, enhance emergency response, and preserve cultural context in ways centralized models simply cannot.

The conversation ventures beyond technical discussions into profound territory - challenging our relationship with technology itself. When Beth says, "AI is great if you're already wise," she invites us to consider how intentional we are with these powerful tools. 

  • Are we asking meaningful questions or simply seeking shortcuts? 
  • Are we curating quality inputs or accepting whatever data corporations feed into their algorithms?


Ready to reimagine what AI could be? Join this thought-provoking discussion about creating technology that truly serves humanity rather than merely extracting value from it.

---

The Human Layer is produced by DesertRat Productions, a boutique narrative studio architecting knowledge systems for the next world. We executive produce podcasts, community sanctuaries, and cultural frameworks that preserve wisdom, catalyze emergence, and compost empire into something regenerative.

Visit our website to explore collaborations or participate in our mission.

Speaker 1:

Cool. Hello everyone, thank you for joining us. This is the Human Layer, and we are going to dive into some of the depths of AI today. We're very excited to have Beth with us. Beth, if you would like to go ahead and introduce yourself and get us started.

Speaker 2:

Hi, I'm Beth Rudden, CEO and Chair of Bast AI, and we build explainable AI software. Part of what makes us very, very different is that we really want to understand how human beings want to receive data.

Speaker 3:

I've had the privilege of getting a little bit under the hood, and it was honestly probably some early chats with Beth and Bast that first made me go: oh, I need to think about this a little differently. I hadn't really done a deep dive at that point; now we're deep down these rabbit holes of knowledge gardens and co-creating with AI as a companion.

But what happened with me was the realization that there is a different way, and that these can be tools. We know they're powerful, they matter, they're here, they're not going anywhere, but we have work to do to make sure that they're actually serving people and that they are explainable and ethical. So I'd love to hear the origins. We could go in way too many directions; I even love the brand, because I got a little into the Egyptian mythology, and there are a lot of layers here. But the human that is Beth I've only come to love more over time, because you really were an entry point for me into thinking a little differently about what AI can be.

Speaker 2:

So we specialize in natural language understanding, and for anything to have understanding, it requires friction. It's a labor, not an act. What we do is take really good, high-quality, sourced information, think protocols for medics, things that have been used over and over again, that people have memorized verbatim, and we ground that into an ontology or a graph model, so that the AI has the scaffolding to understand the context, to understand against. That's something we were doing many, many years ago. But when generative AI, or NLG, came out, it was like a rocket for us, because we realized: oh hey, we can generate any variation we want. Back in the day, you would have IT groups who would say: here are 10 questions; give us 100 variations of these 10 questions and 100 variations of these 10 answers. Now we can do that with true generative AI. What we do differently is make the AI understand the context in which the user is asking the question, and through that grounding of context, we have deterministic AI.

We never go off script and we have no hallucinations, and we can also use the probabilistic side to say: oh hey, would you like an analogy for that? Or: would you like it explained to your five-year-old? So there are good uses of artificial intelligence that I think are incredible. I use it every single day, and lately I've been using it as a creative partner, because I do a lot of keynotes, and one of the things that has always bothered me is attribution. When you're telling stories, you're typically telling your friends' stories, but the jokes always work better in the first person. So now I do a keynote, I record the keynote using a transcript editor, and then I produce a blog post; people can interact with the transcript of the keynote, and the blog contains all the attribution. I'm spending more time, but I am producing a higher-quality output faster than humanly possible.
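To make the grounding idea concrete, here is a minimal sketch of the deterministic core Beth describes. The toy ontology, the substring matching, and the refusal fallback are illustrative assumptions for this transcript, not Bast's actual implementation:

```python
# Toy ontology: a couple of concepts linked to vetted, sourced answers.
ONTOLOGY = {
    "tourniquet": {
        "answer": "Apply high and tight above the wound; record the time of application.",
        "source": "field medic protocol, hemorrhage control section (placeholder citation)",
    },
    "hemorrhage control": {
        "answer": "Direct pressure first; escalate to a tourniquet for limb bleeds.",
        "source": "field medic protocol, trauma care section (placeholder citation)",
    },
}

def grounded_answer(question: str) -> str:
    """Deterministic core: answer only from concepts the graph actually contains.

    A generative model may rephrase the returned text afterwards (an analogy,
    a five-year-old reading level), but the factual content stays pinned to
    the ontology, so there is nothing to hallucinate.
    """
    q = question.lower()
    for concept, node in ONTOLOGY.items():
        if concept in q:
            return f"{node['answer']} [source: {node['source']}]"
    return "I don't have a grounded answer for that."  # refuse rather than guess

print(grounded_answer("How do I apply a tourniquet?"))
```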

Speaker 3:

So, and maybe it's because I'm primed; we had a chat with Spencer Klinomatic, which will likely be the previous episode. I guess I'm curious how you think about what feels like an insurmountable hurdle up against the major models, the frontier models that are going to continue to progress under current trends. Is it a mental model problem? Is it education? Because it feels so obviously right, and for the right people that get exposed to it and see that it's just as powerful, it's easy to get bought in pretty quick. How do we cross the chasm, I guess? Or how do you think about competing with the numbers right now that say everybody's just willing to go use ChatGPT in the free model, or whatever it is, or pay $20 or $40? I don't know.

Speaker 2:

This is more personal and selfish. I think the car comes with features: a windshield, a steering wheel, and a place to put in the gas, right? When I first opened Bast, I was like: hey, we need windshields and steering wheels and a place to put in the gas. A place to put in your data, a place to process your data, a place to see where you're going, and a place to steer. And none of those things are part of ChatGPT. You can put in as many questions as you want, but you have no idea how your questions are actually impacting the model.

Speaker 3:

You'd like to believe that. We have this discussion.

Speaker 1:

Yeah, you're tapping into it, but you really don't know.

Speaker 2:

Yeah, no, you have no idea. And people like me are like: wait a second, that car needs a steering wheel, a windshield, and a place to add gas, or electricity, per your choice. But I think we're just coming up on that, and the more that I think about Crossing the Chasm, Geoffrey Moore's book, the more I think it's not about markets, it's about minds. It's a cognitive chasm, where we are now in a position to correlate new information to people's existing mental models and reduce their period of disequilibrium, so we can make people learn faster. I learned faster.

Speaker 1:

That's what I've noticed. There's a cognitive thing happening where I'm learning much, much faster, because I'm able to take ancient wisdom, which is what I tap into, and merge it with modern technical scenarios. And here's what I'm wondering: we've both been working with different models for two years now, and I use ChatGPT and Claude, both the $20 versions, so I've trained them enough to know how I think. I know enough, as I'm doing more therapeutic prompting, to ask it to address a modern situation from this lineage of thought and that lineage of thought and then give me the output. And the outputs are very good. And I'm curious.

Speaker 1:

I'm like: is it pulling from what I gave it two years ago? Is it pulling from some random thing over here that I know nothing about? And then I end up trying to unpack it; it's still very valuable output. What would it look like if we created our own model? What would it be if we trained our model this way, to have this much wisdom come in and be able to do this type of output over here? I feel like that's the next step for a lot of us that are power users of AI.

Speaker 2:

You know, there's an Ouroboros reference here: a lot of people feed the entire tail of one chat into the mouth, the head, of another. And at the end of the day, it's generative AI. It's a large ball of statistics based on unknown, unconsented data sources, and it's very frustrating for people like me, because they could have chosen the Library of Congress. They could have chosen ACEs, Adverse Childhood Experiences, all kinds of good sources that are actually open data. But they didn't. They chose Reddit. They chose large swaths of OnlyFans. They chose huge volumes of data that really reflect the choices of those very first engineers.

Speaker 2:

And what I see in the future is people like you curating your own inputs. When you curate your own inputs, it's a very different experience. And there is probably a state being added, or memory being added; I think you guys have seen that in ChatGPT, where it says "memory updated," and then you can delete your memory or change your memory. When ChatGPT came out, it did not have that, and now they're having to add it: oh crap, carrying data in a persistent state is very difficult. So I like to use this analogy: artificial intelligence is really great at sensing patterns over large volumes of data, and human beings are incredibly good at defining which of those patterns actually matter. We are specific; AI is sensitive.

Recently I found this amazing prompt that does some really interesting things: you can assign different authors to archetypes. I didn't like the authors the prompt came with, so I wanted to change them. So I said: AI, here is a picture of my bookshelf. Pull out the authors that you think should go to, like, the builder archetype, or the architect archetype, or the therapeutic archetype, or whatever it was.

Speaker 3:

Oh, the bookshelf as the... I've never thought about that. It's super interesting, yeah.

Speaker 2:

Well, what it did, and I actually use this in my keynotes, I have a picture of this bookshelf: it pulled out all of the wrong books. It had no understanding of which specific patterns matter. It was able to read the patterns, this book was the letters of Carl Sandburg or whatever, but it ignored Shakespeare. It ignored my own book. So it doesn't have that sensitivity that is specific to a human being. And this is where AI is so good at creating that 80% that's already been created, and making that go faster than humanly possible. But it requires a human being to use artificial intelligence, like you guys are, to create something new. AI is excellent if you're already wise, and that's the razor's edge.

Speaker 1:

And I feel like, we haven't even talked about this, but we're probably about six months out from where we're going to have to build our own models for this very thing.

Speaker 2:

We can let you do that now. Yeah, absolutely. And we can provide the space. Many people who are building their own assistants get to the point of: hey, I want this assistant to open with a greeting like this, or to exit like that, or to have a manifest that says it was built for this purpose. That's part of what we know how to do, because it's that combination of deterministic and probabilistic, and then using the understanding of what questions your users, or you, are asking it, and then priming it with which ones matter.
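As a rough illustration of the greeting, exit, and manifest idea, a hypothetical assistant manifest might look like the sketch below; every field name here is invented for the example, since Bast's actual schema isn't described in this conversation:

```python
from dataclasses import dataclass, field

@dataclass
class AssistantManifest:
    """Hypothetical manifest pinning down an assistant's purpose and framing."""
    purpose: str                 # the manifest: what this assistant was built for
    greeting: str                # deterministic opening line
    sign_off: str                # deterministic exit line
    grounded_sources: list[str] = field(default_factory=list)  # curated, consented sources only

medic_assistant = AssistantManifest(
    purpose="Recall field-medic protocols verbatim; no improvisation.",
    greeting="Ready. Which protocol do you need?",
    sign_off="Protocol complete. Stay safe.",
    grounded_sources=["hemorrhage_control_protocol.pdf"],  # placeholder source name
)
print(medic_assistant.greeting)
```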

Speaker 3:

Could we have gotten to this point without it, though? I'm playing devil's advocate a little bit, but is there some value in where we landed, with taking in Reddit and the enshittification of the Internet, to land us at this point? I don't know how you think about it, and this is coming from someone who is not nearly as technical, but I do know there was something interesting that happened because of just the volume of data. So is there some value in that we kind of had to go through this period? You've been working with these things for decades, and I can see you're like: no, it was incredibly lazy.

Speaker 2:

Yeah, and you know the hype. This was hype as a service, 100% front to back. Look at who Sam Altman is, what he does, how he raises money, and the type of leadership he has created.

Speaker 3:

It's funny, I worked with the Library of Congress for a decade. It's not like we're short on data.

Speaker 2:

I know the volume that that represents. But the problem with the Library of Congress is that you would have had to give the model scaffolding to understand the taxonomical references and the structures, and that's very hard, versus just sucking in a bunch of data. What they statistically brute-forced is syntax: the model understands the nature of English from a syntactical point of view, but it has no understanding of the meaning of the words, because understanding is a labor. It requires friction, it requires scaffolding, it requires you to do some work to understand the context, to understand the meaning of the word. So if you divorce all the data from its context, then in some ways you're asking it to jump up and down on no feet.

Speaker 2:

It was really crazy, from that standpoint, watching this happen in 2012, 2013, where people were not looking at data. They realized that they could have the machine learning algorithm guess the pattern, so they were just flowing data over the 440 or so statistical algorithms, and they're like: hey, look, I got a really high objective output. And I'm like: does the data have a signal? They're like: what? And I'll go: what's the variance and standard deviation of your data? What's your proxy for Bayesian error? What kind of target are you trying to hit? If you don't know what you're trying to do, the AI will absolutely guess a pattern. And within language, we carry so much signal and metadata.
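Beth's checklist here translates directly into a few lines of pre-flight analysis. A minimal sketch in Python, where the helper name and the leave-one-out 1-nearest-neighbor Bayes-error proxy are illustrative choices rather than anything Bast has published:

```python
import numpy as np

def signal_checks(X: np.ndarray, y: np.ndarray) -> dict:
    """Sanity-check a dataset before fitting anything to it."""
    # Per-feature spread: near-zero variance means a feature carries no signal.
    variance = X.var(axis=0)
    std_dev = X.std(axis=0)

    # Leave-one-out 1-NN error as a crude Bayes-error proxy:
    # asymptotically, NN error <= 2 * Bayes error (Cover & Hart, 1967).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a point may not be its own neighbor
    nn_error = float(np.mean(y[dists.argmin(axis=1)] != y))

    return {
        "variance": variance,
        "std_dev": std_dev,
        "loo_1nn_error": nn_error,  # Bayes error is roughly >= nn_error / 2
        "majority_class_baseline": np.bincount(y).max() / len(y),
    }

# Labels drawn independently of the features: 1-NN error lands near chance,
# i.e. the data has no signal, no matter how fancy the downstream model is.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
print(signal_checks(X, y)["loo_1nn_error"])  # ~0.5
```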

Speaker 3:

Across cultures, across borders. Yeah, yeah.

Speaker 2:

I mean, and etymology, you know. So now we have this amazing tool, but who is deciding what outputs or what questions matter? What inputs matter for these tools? And there are not enough people. If we want a representation of humanity, we need 1.6 or 2 billion people, a 20% representation of 8 billion, generating these models, or generating the curation of the data for these models, and then carrying the reference of where they got the data, with consent.

When you take data without consent... it's like one of my favorite stories, about my daughter, who had to get a form from a doctor saying she was physically fit for a thing at school. I was really pissed off at her, so we went to a doc-in-the-box in the morning, and she got to skip a little bit of school, and I made her fill out the form. We get back into the room with the doctor, and the doctor's like: so you have generalized anxiety and you broke your knee? I was like: Molly, what did you do? She's like: well, I have anxiety, generally, and my knee hurts. And so now the record shows my daughter has generalized anxiety and broke her knee.

Speaker 3:

And the context matters. Wow. It's why I am only growing more and more attuned to context over time, which is why I've always loved chatting with Beth, because I know that that is your life: pointing out how much context matters.

Speaker 2:

Data is a flow, not an event. And we keep thinking that we can freeze time and say: this is an artifact of a human being's experience, I know everything about that person. You don't. You know nothing. And this is where the choices that people are making about the weights and measures come in. We opened up Llama, because it was open source, and we could see that it was weighted, biased, skewed, to use contractions. That is an explicit thing; somebody made a choice to make the system more apt to hit somebody's personification or anthropomorphism, which is part of our paleolithic human body. And I'm like: we have no defenses against this, none. So what do we do? How do I get the word out? Do you remember, in grade school, you would see the sun shining with a smiley face? That's a cute little personification. Now people are building relationships with this machine, and it is not understanding the meaning of the human being's words.

Speaker 1:

So where do you see it going in five years? Because I feel like we're definitely past the tipping point, and there's so much adoption of this tech. We saw this with Web2: the broligarchs just did everything based on capitalism, instead of saying, hey, maybe we should look at how this impacts humanity and be more mindful. And we already know the cat's out of the bag with AI. So if you could wave a magic wand, how would you like it to look in five years, with society using AI as a very powerful tool to do things we couldn't do 10 years ago?

Speaker 2:

I think every library should have a language model, introducing that locality's language: this is the Bronx, this is Harlem, this is Denver. I think every librarian should be the steward of what goes into that language model, and it should be a utility, and that utility is something that expands that particular locale's culture. The Maori do this. They created a language model for their language, and they only invite in the people they want, which I think is absolutely the best, because when you're giving somebody your story, you're literally giving them something that they then carry with them.

Speaker 3:

When you say "they"... I mean, I've been to New Zealand, and I understand what a rich culture it is, which can almost be off-putting and frightening if you're not exposed to it often. But what does that mean, that they have done that? So they have a...

Speaker 2:

An API that they invite people to access, which carries the Maori language in a language model that they created on their own servers, with their language, with their curated teams, with their elders. It's fantastic. They have an entire data-rights framework; it's a beautiful model. But that's what it would look like: librarians. It's part of our rich texture, of who we are and the culture of where we live and where we persist.

Speaker 3:

The network exists. Libraries can be the islands, the New Zealands of more complex societies.

Speaker 2:

Librarians, yeah. Librarians are already public servants who are dealing with people experiencing homelessness. They are fighting battles on the front line every single day. They already curate what information goes into a local culture. I am obviously incredibly biased with this answer, but I work with librarians a lot, and I think that, as information stewards, they are trained to understand information and context, and then trained to do the most magical thing in the world: somebody comes through the door and says, I need a book on Israeli cooking. Is that in the Israeli section or the cooking section? You have to understand the question under the question: what does a person really want, or really need, based on what they're actually asking for? This is where I think conversational systems are so fascinating, because when you get into conversation analysis, true conversation analysis, it's about power and power structures, and how people are interoperating with language to ask a question, or to get a response, or to emoji a response in a way that makes people smile.

Speaker 3:

That's an insider, we can unpack it, thank you.

Speaker 2:

Yeah, I'm like sofa king.

Speaker 3:

Sofa, crown, as emojis. Wonderful, nice. I've gotten used to using a bowl of soup: fa sho, S-H-O. Wow.

Speaker 1:

Fa sho. That's some nerdy shit right there. Wonderful, I love it.

Speaker 2:

That's fantastic. But this is what you can do with language, once you raise the curiosity.

Speaker 1:

I feel like we've landed in the knowledge garden conversation; that is the bridge. Knowledge gardens are actually the conversation we just had before tonight; before your talk, Spencer's going to present some stuff about knowledge gardens. But it feels like, on a cosmolocal level, you could have librarians running these knowledge gardens, with technologists coming in, and journalists. It could be a complete collaboration, and then we put the AI model into that.

Speaker 1:

So then you have the knowledge that's front-facing, but you also have the artificial intelligence within it, controlled by these knowledge garden architects and librarians, and that is a very clear reflection of the locale. And then you begin connecting knowledge gardens through decentralization, through different types of blockchains, and then you can actually put the whole thing out there.

Speaker 2:

Tell me more about journalism. Is that for the dialectics, to ask questions? Or, like, why journalists?

Speaker 3:

Oh, I only said that knowing where we've been. I think with JournoDAO there's a collective that feels uneasy about the future of the fourth estate and what that profession ultimately means or does. So it just seems like an important layer, but in combination with, yeah, librarians. You could probably architect these together. The physical network of libraries to me is what's really interesting. We don't have to redo that; these things exist.

Speaker 2:

When you say you worked with the Library of Congress, you can unpack that in ways most people can't. They have no understanding of the level of taxonomical references and information systems there.

Speaker 3:

You want the nerds of nerds. I mean, in crypto we think we know nerds.

Speaker 2:

Uh-uh, no, they invented it. That's awesome, yeah.

Speaker 1:

And I feel like, we talked about this in the last episode, news deserts are something we focused on at JournoDAO. Like food deserts, you have areas that have no more local news.

Speaker 1:

The local news has either been bought by a McClatchy or another large conglomerate, or it was bought and shut down, so there's no one reporting locally. One of our members is in rural New Mexico, and when there's a wildfire or an issue, the only way he can get access to information is through ham radio or through the local emergency services. Normally it would also be the role of the local reporter to put that information out, so people could make decisions in real time based on what's happening around them. So that news desert really does have an impact on a community.

Speaker 1:

You also lose a connection with what your politicians are doing. If there's no one at the school board meeting or the city council meeting reporting what these people were talking about, you have no way to filter the information. But all of these communities pretty much still have a library. So actually, if you connect the library and the knowledge garden with the local journalists who are still looking for a way to create, produce, and give that information out, and put all of that together, that actually is a viable model for some of these communities that have no information sources at all.

Speaker 3:

Yeah, and I think there's just a need for re-inspiring those folks, and I'm sure there are a lot of librarians like that. They've always been at the front lines and dealt with the truth of humanity, whether it's drug addiction, all of it. They totally do. They are next in line after social workers.

Speaker 2:

Well, and fire chiefs and fire stations; those are typically volunteer, especially in the smaller communities, so the paramedics attached to them as well. These are the frontline workers. And when I started Bast, that was my thesis: I want to give these people AI.

Speaker 1:

So how far into that process are you? You don't have to get super detailed or announce anything, but it sounds like it's exciting.

Speaker 2:

We're going to do a press release soon. I started with medics in theater, like medics in war zones, mainly because I had the opportunity to. And then one of my very first clients was like: hey, can we do this on-device and operate without internet connectivity? I'm like: yes, let me show you how small this can get. And this is back in '23, before DeepSeek, mind you, because the technology exists. If you know how to curate good, high-quality data, why do you need large volumes of compute? Why can't you upload what you need, have it on-device, and then, when you go back to the FOB, upload something different that you need?

The idea we always wanted to start with is reducing somebody's cognitive load. If you are a paramedic or a frontline worker, you are trained on a protocol verbatim. So we had to say: no variation, no hallucination; here is the picture of the page of the protocol. Then we found out the hard way that a lot of the medics are like: look, we've got the procedures down. What we need to know is how you put the stretcher in the Lakota versus the Black Hawk. So we're like: oh okay, great, user research win.

Speaker 2:

So a lot of this stuff is: how do I give people a disaster-response first aid kit that has the local language, that tells you where your first aid information is and where the centers are, so that you can get community organization in place? You were talking about ham radios; to me, that's where I see everybody going to have their own version of this. Don't we want that version to be using really good, high-quality data that's already been tested, that has referenceability, that has some way it can be easily updated? And I think that with the disintegration of education, we need something that gives us some national understanding, and some understanding of the patchwork of America, where we have so many different, wonderful, beautiful cultures going on. How do we combine that into a knowledge base that is organic?

Speaker 1:

That's a good question. I wish there was an answer to it, because that's a really good one.

Speaker 3:

I mean, the answer is: yeah, keep fighting the good fight. I don't know.

Speaker 2:

I always am like: hey look, DNA has four characters. That's double your storage volume. So think about using trees as storage for data, storing things that you could then have conversations with. I've seen some of this technology. They actually used it in Potsdam, outside Berlin, where The Matrix 4 was filmed, in Babelsberg Studio. Part of that is that they can take so many different data points of a human being that they can recreate how that human being moves in their physical body, as well as recreate what they would say, so you can bring back the dead.

And what if that was stored in a tree? Because part of the problem with that is they could only store about an hour's worth of data; they had the entire walls and ceilings lined with hard drives, because it was so much data they had to store. It's fascinating. But where we could go in the future is we could have libraries, and we do need to figure out how to store information better. That's part of what I really think about a lot: what is the least amount of information that I need to infer the most? Going back, classically, to a biomimetic response: how do I take the least amount and give the most?
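A quick back-of-the-envelope on the "four characters" point: an alphabet of $k$ symbols carries $\log_2 k$ bits per symbol, so

$$\log_2 4 = 2 \text{ bits per DNA base (A, C, G, T)} \qquad \text{vs.} \qquad \log_2 2 = 1 \text{ bit per binary digit},$$

which is where "double your storage volume" per character comes from.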

Speaker 3:

That's not necessarily... I mean, yeah, the hardware. Up against Stargate, there are things happening at a scale where I don't know the math on how that equates to highly compressed storage. But look at trees, look at what DNA represents, at a local level, across a library network. Are we able to have that fight in a distributed manner? Maybe?

Speaker 2:

Yeah, maybe. But I mean, let's just find out how photosynthesis works and figure out how nature does it; we still don't know anything. And when you are talking about large compute and thinking about the really amazing things you can do: you can do protein folding at scale, you can look for new galaxies. If you're in that unknown-unknown space, I'm a fan of using the compute power to do something like that. I am not a fan of using it to, like, re-establish your Grindr thread.

Speaker 1:

No, that's a great point. Yeah, my brain just went into all the ways people are using AI right now, because we use it in a very specific way, and I don't really go outside of that. But it is kind of terrifying; there are so many ways this could be used for really dark, dark things, or just very banal things, which isn't necessarily bad. I think we all kind of need distraction in modern society a little bit, just because it's so chaotic. But yeah, that's a fascinating one to unpack.

Speaker 3:

What's left or untold, or things you want to point people towards? Or we can keep going. I think we're probably up against the last five to ten minutes, but we can also just keep going into the crazy and esoteric; it's up to you.

Speaker 2:

A frame that I've been really trying to get people to think about is cognitive science as a binding of philosophy, psychology, and computer science, which is the work that I'm doing with ontologies. Ontology is the study, in philosophy, of the nature of your reality based on the language you use, and there is so much to be done there. When you talk to my people, graph theory experts, game theory people, it's fascinating; vector space is just one of those things that totally blew my mind, completely. I'm not saying I have that solved with ontologies and graph models and grounding. But there's this huge aspect of psychology, and again I'm going to use the words game theory and point back to transactional analysis and psychotherapy and Eric Berne. I'm horrified reading the book that he wrote about games, because it is a sign of his time, which is the '50s and '60s: sexist, racist, very, very deterministic.

Speaker 2:

As far as saying alcoholism isn't a disease, all the things that we know differently now. But the games that he's talking about are all from the abnormal psychology world, and I think people don't understand that they've been codified into a lot of these systems. So if we unpack that psychology, I don't know what to call it, channel of cognitive science, and bring it into modern-day neuroscience, that's where I want people to go. And trauma-informed language and positive psychology and all of those things, nonviolent communication, which is my favorite.

Speaker 1:

That's actually what we just stumbled into, using it that way. I didn't mention this earlier, but before I got into some of this tech work, I got a degree from Naropa University in yoga. So I am steeped in obscure yoga therapy, but also ecopsychology and a lot of this stuff, and I'd studied psychology in the past as well, so it's kind of fascinating to put that through the model.

Speaker 1:

And like: here's a scenario I just saw in a boardroom. Explain it to me like Joanna Macy might explain it to me. Explain it to me like Adrienne Maree Brown; how would she explain it to me?

Speaker 2:

You know the names. Yes, you are curating the knowledge and putting it in a frame. When you watch younger people use AI, they are asking questions like how do I get rich without working hard?

Speaker 1:

That's the terrifying part.

Speaker 2:

Yes, but it's not giving them something that they can really act on. Yeah, so AI is great if you are already wise.

Speaker 1:

Which I feel is why we're going to evolve into our own model: because I want that model to be trained on this, this, this, and this, so that when someone young that I'm mentoring, or that we're working with in whatever capacity, comes to it, we know the guardrails are in place. It's only going to pull from this lineage of yoga or psychotherapy. We even got into neuroplasticity recently, just spitballing some stuff back and forth, and I'm going to spend all summer unpacking that, because it is mind-boggling.

Speaker 2:

And I definitely want somebody like you, your mind, to be working with an AI system that actually is storing what you're asking, oh, that'd be great, so you can control it and feed it back. We prompt to learn, and learn to prompt; we question to learn, and learn to question. This is dialectics 101. And how do we get that into the hands of everybody who is already wise?

Speaker 2:

One of the things that I told the librarians is: set up a thirst trap for people to come in. Use a Zen koan, which is an old thought puzzle, like "if you see the Buddha on the road, kill him." If you see that on a screen, you're going to come into a library and go: what the heck is that? That's a thirst trap, a professional thirst trap; there are a lot of amateur ones out there. So how do we do that, so that we can understand what people are asking the AI, and learn how to get them to use an AI that is curated against the sources that we know are good?

Speaker 3:

That's why I was excited about this, knowing there were threads that I hadn't fully connected. But, for better or worse, this is how I operate: there is something in this substructure space that I haven't quite fully mapped, but it's going to matter. So yeah, it's super interesting to dive deeper into what Bast ultimately represents, knowing your path to this point and why right now matters so much to get this right, but also connecting dots with the direct work we're doing with the Human Layer and many other things. We can start to assemble these, for those of us that appreciate modularity, and it should be composable. Yeah, composability, right?

Speaker 3:

This is the beauty of the best parts of crypto and Web3. But yeah, we have work to do.

Speaker 1:

I feel like this is a "to be continued" closing. This is a conversation that will be ongoing for a long time, which is amazing.

Speaker 2:

We have much to solve. Oh, my God, we really do. Thank you for having me. This is lovely.

Speaker 1:

Yeah, thank you so much for joining us, and again, this is the start of the conversation for sure. Thank you so much.

Speaker 2:

That's great. Thanks, Beth. All right.
