Human 2.0: The Cyborg Revolution Is Here

Kevin Warwick

We’re delving into the world of cybernetics and the potential cyberpunk future with a true pioneer and a living legend in the field, Kevin Warwick. We explore Kevin’s history of pushing the boundaries of human potential by integrating technology with biology, and what the future holds.


In this Episode of The Human Upgrade™...

Today, we delve deep into the world of cybernetics and the potential cyberpunk future with a true pioneer and a living legend in the field, Kevin Warwick. As an Emeritus professor at Coventry and Reading Universities, Kevin is celebrated as the world’s first cyborg, earning him the moniker: Captain Cyborg.

In this episode, we explore the incredible journey of Kevin Warwick, who became a cyborg over 25 years ago by implanting a coin-sized chip in his arm, enabling him to interact with the digital world in ways that were once unimaginable. But this isn’t just about becoming a cyborg; it’s about the potential, the risks, and the opportunities at the intersection of AI, robotics, and biomedical engineering. Along the way, we discuss the fascinating implications of merging humans with technology and how it can reshape our understanding of reality.

Kevin’s fearless exploration of the world of cybernetics is truly pioneering. He has been willing to take risks to expand our knowledge and possibilities as humans. Join us as we explore the profound implications of merging biology with technology, opening up new realms of human potential and perception, and what our future could hold as technology and research continue to progress.

“The scientist I am, and was, is somebody who tries to push the boundaries a little bit.”


(00:01:43) The Realities of Pioneering Human Enhancement

  • The paradoxes involved in the evolution of humanity through innovative research 
  • The ethics of building technology that you know could be used for good or bad
  • Is Kevin still wearing the implant?
  • The shortage of accountability for those who do bad things with technology in government

(00:10:05) Kevin’s Implantation Experiment & Research 

  • What happened when he connected his nervous system to the internet 
  • Using BrainGate electrodes
  • The lack of progression in neuroscience since his experiment 
  • What it felt like to inject a current into his nervous system

(00:18:10) Expanding Our Biological “Software” 

  • Bruce Sterling’s cyberpunk books 
  • Musing on potential neurological upgrades like a digital compass or tongue printer
  • Experimenting with an infrared brain stimulator 

(00:24:39) How Far Is Too Far? Risk Tolerance for the Sake of Science 

  • Hacking the communications network between cells and voltage-gated calcium channels
  • Concerns with EMFs and implants
  • Taking risks in order to further science 
  • Considering the risks of not progressing the science 

(00:32:47) Exploring Human Enhancement Possibilities

  • Human enhancement using electricity
  • P300, the EEG measure: a lag time on reality
  • Possibilities and potential risks for enhancing human communication

(00:39:21) Kevin’s Take on Longevity From His Brain Cell Research

  • The key to longevity according to Kevin
  • Experimenting with brain cells to influence longevity
  • The heart-body-mind-gut connection in the brain 

(00:43:13) Biohacking As An Entry Point Into Becoming a Cyborg

(00:52:41) The Future of Cyborgs, Humans & Artificial Intelligence

  • How far will AI go in the future?
  • Possible dangers in the field of AI
  • Cyborg possibilities in 25 years
  • Meeting Jeff Bezos at the World Economic Forum
  • Augmented filtering with AI and creating antivirus software for our minds
  • Metaphysical applications for exploring the universe
  • Opportunities for creating brain-computer interfaces

Enjoy the show!

LISTEN: “Follow” or “subscribe” to The Human Upgrade™ with Dave Asprey on your favorite podcast platform.

REVIEW: Go to Apple Podcasts at daveasprey.com/apple and leave a (hopefully) 5-star rating and a creative review.

FEEDBACK: Got a comment, idea or question for the podcast? Submit via this form!

SOCIAL: Follow @thehumanupgradepodcast on Instagram and Facebook.

JOIN: Learn directly from Dave Asprey alongside others in a membership group: ourupgradecollective.com.

[00:00:16] Dave: You’re listening to The Human Upgrade with Dave Asprey. Today is an interview that I’ve wanted to do since before I had a show. And this is about the potential cyberpunk future. You guys know I’m a computer hacker. I’m wearing my street cred shirt here. This is a cyberpunk shirt that, Tom Bilyeu, you gave me. 

[00:00:40] Kevin Warwick is a storied figure from my time in computer science and as a computer hacker. And he is probably the world’s first cyborg. They call him Captain Cyborg. Now, the reason they call him that is because he was the first guy to have a coin-sized chip implanted in his arm so he could open doors and activate lights.

[00:01:04] And we’re going to go into the benefits of this and the risks. So this isn’t one of those, let’s all become cyborg episodes. But if you’re looking at what happens with AI, robotics, biomedical engineering, there’s so much opportunity and risk. And this is a guy who’s famous for it and has been living it for 25 years.

[00:01:30] Emeritus professor at Coventry and Reading Universities. It’s an honor to meet you. I think you were featured in Wired magazine years ago, right? There’s an article on you.

[00:01:41] Kevin: On the cover. Yeah. February 2000. If you look, I looked a bit younger then, which I was, but I have to say, in the academic world, it’s not normal to be on the cover of Wired Magazine. So it’s pretty cool when it happens. You get all academic plaudits, but the Wired Magazine cover tops those, I think.

[00:02:04] Dave: Did you get shamed by other professors for selling out to mass marketing instead of just keeping your academic papers where only seven other researchers read them?

[00:02:13] Kevin: I think you’re quite right in what you’re pointing out. I don’t know about shame, but it annoys a few academics. So there’s probably one or two academic plaudits that I didn’t get because of the Wired Magazine cover and things like that.

[00:02:30] Dave: If you’re listening and you’re not familiar with academia, it’s almost like talking to the public about your work makes your work less valuable, which is bizarre, because if you discover something new, that’s meaningful for humanity, I believe you have a moral obligation to stand at the top of the mountain and shout it out because it matters. 

[00:02:53] But in traditional, especially European academia, it’s like, no, you have to be very humble and only whisper it in Latin code. And somehow it’s going to get out there. I think it doesn’t help evolve humanity, but I’m also not an academic.

[00:03:07] Kevin: Yeah. But I really thought it was important to get out there and say what I was doing, and why, and try and give some explanation and open people’s minds because, I think, at the time, first of all, it was technically innovative. And secondly, it was scratching the edges of science fiction as well, which was good fun.

[00:03:29] But so when you’re doing it for real, I think that surprised quite a few people. But it was getting that interaction, I think, with the outside world that I felt was very important. You’re quite right. 

[00:03:42] Particularly in Europe, you’re not supposed to enjoy doing science. You’re not supposed to have some fun, which I always did. You’re not supposed to go out to the outside world and start telling people and appear on television and things like that. I think I did annoy a few academics in how I did things.

[00:04:02] Dave: It’s all right by me anyway. In fact, I think it’s great. And we’ve seen others who’ve been on the show, like David Sinclair in the longevity field, where I’m really active. He went out there and said, we can reverse aging in cells. And some people have gotten mad, and other people said, great. But look, if you can do something magical, then we need to talk about it. And in your case, you did implant something in your arm, and you’re the first person to do that, and that is meaningful.

[00:04:29] Kevin: Yeah, and I think it’s not a normal thing in the academic world. You’re supposed to take 100 people and do the same experiment on 100, or 1,000, or whatever it happens to be, and then report statistically on what results you get over the– this was quite a dangerous thing at the time.

[00:04:47] Now, some of it, you can look back and say, perhaps it wasn’t that dangerous, but, at the time, it was dangerous because of the technology we used. When you are the first person to do something, you really are taking a step off the mountaintop. You don’t really know what’s going to be there. You hope you know what’s going to happen. You think you know what’s going to happen. And some things go well. Some things, not so well, but you hope you’re going to be okay at the end of it.

[00:05:18] Dave: You’re a little bit older and wiser now than you were in 1998 when you first had that implant. I was working in the data center business at the time, teaching Internet architecture in Silicon Valley. And I saw that, and I was like, this is really incredible. I was a little bit naive.

[00:05:36] We built the Internet as we know it today. This was a company that had the data center that held Google when it was two guys and two computers, and the Facebook when it was eight computers. Very, very central to the growth of Web 1. 

[00:05:49] And information wants to be free, and Bruce Sterling, and cyberpunk, and the idea that we’re going to democratize information. Over the last 23 years or 25 years since then, I’ve watched corporate interests and governments use it to create a mass surveillance and censorship platform that’s mostly automated.

[00:06:10] This is not the system that I built, and I naively thought that other engineers like me, no one would ever do what the bureaucrats wanted us to do. And I failed to understand that there are some evil people out there. Oh, you want to write a system that automatically shocks people who think the wrong thing? Sure. I’ll write that code for $6. There are people like that out there. Do you still have that implant from 1998?  

[00:06:32] Kevin: No, that implant ended up in the Science Museum in London, to be exact. But I think that’s one thing as a scientist, particularly in the academic world, you live with. And some people say, oh, Albert Einstein, if he knew what his results were going to be used for, he said he wouldn’t have done the work that he did, which is a load of rubbish, really. Of course, he did what he wanted to do. And you know as a scientist, it could be used for good. It could be used for bad.

[00:07:05] And I think all of the implants that I’ve had and been involved with, there are two aspects, and that is good or bad. Maybe medically they can help people in certain instances. I’ve done work with Parkinson’s disease stimulators, which have an obvious way of helping people to overcome some of the problems with the disease.

[00:07:27] But at the same time, you can use exactly the same technology for very negative effects. So I think you just have to live with it and take part in any discussions that are on the ethical side of what you’re doing, whether it’s positive, whether it’s negative. I’m always open to take part in discussions like that.

[00:07:48] There’s not a lot more I can do, I think. I’m a research scientist, and research is taking those steps into, not the unknown, but the little known and trying to push forward the technology and the science that we understand, and hence could be good, could be bad.

[00:08:08] Dave: This’ll be particularly outrageous for people on both sides of the pond here, but I look at research the same way I look at guns. It depends on what you do with it.

[00:08:22] Kevin: Yeah.

[00:08:22] Dave: They’re both useful, and they both could be used for harm. And so this comes down to this weird thing called ethics, and healthy nervous systems, and control systems, particularly on government, that hold people accountable if they do evil things with or without the technology. 

[00:08:40] And I think we have a shortage of accountability in government around the world, where, hey, you’re organizing documents, so you can’t do that. And they’re like, yeah, but you can’t prove I did it, and you can’t catch me, and you can’t enforce it, so therefore it doesn’t count.

[00:08:52] I’m like, you’re a bad person. But we don’t have things to do with bad people. But if they all had implants, and I had control of the implants, I could fix it. So by virtue of that, I should have control of their implants. Does that make sense?

[00:09:05] Kevin: Yeah, yeah, yeah, yeah. But I think the same issues are raised by just about any technology– take something like the telephone. I’m sure there were a lot of people at the time saying, this is a very bad thing. All my privacy is going, etc. Other people would say, this is a really good thing.

[00:09:31] I can communicate in a much better way. Certainly, it had enormous commercial potential, and the number of people that have had jobs– I’ve had jobs in the past in the telecoms industry. That’s how I started after leaving school. So there’s enormous commercial impact. It changes society completely. So is it good? Is it bad? It’s a bit of both, and it depends what you want to do with it, which is what you were saying.

[00:09:59] Dave: What happened when you connected your nervous system to the Internet? Explain how that worked.

[00:10:04] Kevin: Yeah. I’m not sure whether you can see that. There are still some scars there. That’s where it happened. We used what’s called the BrainGate, or part of the BrainGate, which has been used in various paralyzed individuals since that time. It consisted of 100 pins with electrodes on the ends of them.

[00:10:31] And what the surgeons did was open up my arm, found the bunch of nerve fibers going down my arm, cut away the myelin sheath that covers the nerve fibers, and then fired these 100 pins into my nerve fibers. And it was like that for just over three months for the experiment. I had wires running further up my arm.

[00:10:58] They came out of my body. There are all sorts of reasons why we didn’t implant everything. So it was like bringing my nervous system out of my body, effectively. And we connected me up to the computer, so I had a hard-wired or wireless connection, whichever we preferred, between my nervous system and the computer.

[00:11:21] And then we could do a whole bunch of experiments, both monitoring what was going on on my nervous system with hand movements, and secondly, firing signals into the nervous system. And that latter thing was interesting because there wasn’t an awful lot of prior work. What work there was had really been done on chicken sciatic nerves, which are not that much like human nerve fibers. And so you’re instantly faced with, what signals should we put into the nervous system? What should the voltage be? What should the current be?

[00:12:00] Dave: Level 1 and level 2 OSI model issues. We don’t know the signaling mechanism or even voltage that indicates what’s going on. 

[00:12:07] Kevin: You’ve got it.

[00:12:08] Dave: From a network engineering perspective, it’s weird because it’s not digital. So you solve those problems, right?

[00:12:15] Kevin: As best we could. Yeah, yeah, yeah, yeah. Literally having to test things out and see, how much voltage and current is okay to push into my nervous system before we cause any trouble? I didn’t tell my wife that during the day we’d turned the voltage up a little bit. So it was more the power. In the end, the current we were putting in was microamps, but I think it was 50 volts that was being applied to my nervous system.
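The calibration Kevin describes, starting at a low level and stepping up until there is any sign of trouble, has a simple shape. A hypothetical sketch of that ramp, not code from the actual experiment; the current levels and the safety check are invented for illustration:

```python
def ramp_stimulus(levels_ua, is_safe):
    """Step through candidate current levels (in microamps), lowest
    first, and return the highest level that passed the safety check.

    `is_safe` stands in for the human-in-the-loop judgment the team
    made each session; the level values below are invented.
    """
    last_ok = None
    for level in sorted(levels_ua):
        if not is_safe(level):
            break  # stop at the first unsafe step; keep the last good level
        last_ok = level
    return last_ok

# Example: suppose anything at or below 40 microamps proved tolerable.
best = ramp_stimulus([10, 20, 40, 80], lambda ua: ua <= 40)
```

The point of the pattern is that the loop never overshoots by more than one step, which is why "start low and work up" is the default in this kind of experiment.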

[00:12:48] Dave: Since that time, things have progressed a little bit. With some of the things that we do at my neuroscience facility, you can get a signal into the brain just by putting electrodes on the skull with tDCS, or through the ears. There’s even one piece of equipment we don’t use that induces a current using magnetics, and you can target very, very carefully chosen parts of the brain. They’re doing this for depression with high amounts of electricity. But with small amounts, people have profound mystical experiences without any drugs.

[00:13:23] And we’re nowhere near figuring all of it out, but it’s better than 20, 25 years ago. And we were doing it at the target of those nerves. You were doing it at the ends of the nerves, but doesn’t that hurt like hell, peeling away your myelin sheath? Myelin sheath degradation is part of Parkinson’s and ALS. It’s nasty stuff. Were you in a lot of pain when you did this?

[00:13:49] Kevin: No. To be honest, no. I had local anaesthetic. It was a two-hour operation, partly because the surgeons didn’t really know what they were doing until they came to do it. It’s one of those things. And once the local anaesthetic had worn off, I never really felt pain.

[00:14:09] When the power was being injected into my nervous system– it depends what you call it– from a scientific point of view, I never regarded it as pain. It was providing my brain with signals, and really, I could recognize the number of pulses and the frequency. Essentially, the whole thing of signaling to me involved pulses, and it depended how many of them came in the space of time. 

[00:14:38] So one example, I was looking at extending the range of senses. And so I connected up ultrasonic sensors, like a bat sense. So I was receiving pulses of current that increased in frequency the closer the object came. So if something’s further away, bing, bing, bing. And if something comes closer, bing, bing, bing, bing, bing, bing, bing.

[00:15:06] And that was pulses that my brain was able to pick up. I wouldn’t have described them as painful. I understood what was happening and could link very quickly. Your brain can link, oh, there’s something coming closer, or something’s further away. Or if you’re moving around, you’re getting close to an object. It was great fun. Great fun.
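The mapping Kevin describes, pulses arriving faster as an object gets closer, can be sketched as a simple linear interpolation from sensor range to pulse rate. All the distances and rates here are illustrative guesses, not the parameters actually used in the experiment:

```python
def pulse_rate_hz(distance_m, near_m=0.2, far_m=3.0, min_hz=2.0, max_hz=40.0):
    """Map an ultrasonic range reading to a stimulation pulse rate:
    close objects produce fast pulse trains ('bing, bing, bing'),
    distant ones slow trains. All numbers are invented for illustration.
    """
    d = min(max(distance_m, near_m), far_m)  # clamp to the sensor's working range
    frac = (far_m - d) / (far_m - near_m)    # 1.0 at near_m, 0.0 at far_m
    return min_hz + frac * (max_hz - min_hz)
```

The brain only needs the monotonic relationship, closer means faster, which is presumably why the sense could be learned so quickly.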

[00:15:27] Dave: What was that like? Did you get used to having bat powers?

[00:15:32] Kevin: Oh, so much so. We were trying to do it in a standard environment because we had to produce papers on what we were doing. So let’s try it here. Let’s try it there. But one of the researchers, Ian, suddenly brought a board towards me very quickly when I wasn’t expecting it or anything like that.

[00:15:55] It was scary. I thought something was coming for me, and I didn’t know what it was. And it was very much a reactive movement, which amazed me– that your brain links these signals to what’s going on outside and then responds so quickly. So that was– 

[00:16:15] Dave: That bypassed your prefrontal cortex, which is where you thought you were processing, went straight to your amygdala because it was an emergency situation, which means it really did get built into your brain.

[00:16:26] Kevin: Yeah, yeah, yeah, yeah. Part of the experiment, when I went home at night, I was unplugged as it were, so I just chilled out normally. But every day, we were doing laboratory experiments. So I got used to it very much.

[00:16:46] Dave: When you took it off or turned it off, did you miss it? Did you feel like you were less than?

[00:16:52] Kevin: The answer is yes, but in two ways really. One, because I’m very much involved with the experiments and we’re trying to get as many experiments done as we could while I was all wired up. But I was very much a lab rat, as it were, a guinea pig, whatever way you want to describe it.

[00:17:11] And to be honest, I think for the whole team, because we’d been working flat out for three months, we were absolutely exhausted. So one of the first responses was just, oof, let’s chill a little bit. Let’s just relax, have a break. Rather than, oh, I can’t live without my ultrasonic sense. 

[00:17:32] It’s nice to know that we can have other senses. Didn’t try infrared, but I’m pretty sure infrared would have worked just as well. But it does open up, how far can we take it? What senses might people like? Would an x-ray sense actually do if we could get that to operate in a safe way? Would that work, etc.? So that’s where it becomes pseudo-science fiction, but you’re doing it. It’s science.

[00:18:04] Dave: Were you a fan of William Gibson’s work?

[00:18:08] Kevin: I read the books, or watch the films if that’s easier. I still do. Still do. Even time travel, although I don’t believe in that so much. 

[00:18:24] Dave: Yeah.

[00:18:26] Kevin: Yeah, no, clearly, William Gibson was an inspiration. Yeah.

[00:18:31] Dave: Me too. And Bruce Sterling is, I think, my favorite. I’ve been trying to get him on the show since the start of the show, but he’s hard to pin down. If you’re listening to this and you’ve never heard of Bruce Sterling, I think he’s actually the best writer from the second half of the 20th century, just as a writer for any genre, including historical fiction, his writing about the Enlightenment, as well as the creation of the cyberpunk genre, which is strangely predictive of this conversation, even. 

[00:18:58] He has, in some of his series, humanity in the future splitting in two directions. One group is called the Shapers, and these are the people who are changing biology to allow humans to do things that we can’t do today. And the other group, he calls them the Mechanists. These are people who are adding cybernetic components. And there’s a core philosophical shift that becomes almost two versions of the species. And that’s affected my thinking on the world.

[00:19:29] We should fully max out our existing hardware and use it elegantly to its full capabilities before we start upgrading our hardware. Write better code so you don’t need a new laptop, but get a new laptop when the code is fully maximized. Do you think that there’s room for expanding our existing hardware before we add in, or should we just go straight to adding parts?

[00:19:55] Kevin: I think it’s both. Why not expand the hardware, software, etc.? And that makes it better when you are all linked up. Hopefully, it can work better, and the better the connection. I think, in a sense, it’s perhaps not the hardware that’s the issue. It’s more the connections and understanding the connections.

[00:20:17] You were describing more external signaling. And if we can do it without being invasive, great. But can we get the resolution that we can get with invasive? So for me, it’s not so much maxing out the hardware, but if we can do that, fine. That’s not going to be any problem. It can help whatever. The exciting thing is more the interface. That’s where the problem is, all the interesting bits are for me anyway.

[00:20:48] Dave: The interface is indeed the problem. And I did an experiment right when I was starting the show. I hand-soldered this device that sat around my ankle, and it had little cell phone vibrators– there’s eight of them around it– and a digital compass. So it always vibrated north.

[00:21:08] I have zero sense of direction. Some people have one. I do not. And so I’m a visual reckoner. Some people can just unnaturally know which way North is. I do not know how they do it. So I’m going to teach my nervous system that. And yes, I suck at soldering. So after six weeks, it broke, and I never fixed it.

[00:21:24] But for that six-week period, I knew north. And after a while, I stopped feeling the vibration. I just knew north. My body was like, oh, that’s a reliable signal. And it just stopped. Then I’d get in an elevator where the compass didn’t work, and it would go in a little circle. And I was like, whoa, I don’t know which way north is.

[00:21:38] I lost my geomagnetic sense. That was really based on a satellite. And I still think that if I’d have worn that for a year, I probably would have had a sense of direction because my brain would have automatically and unconsciously correlated the new signal with whatever signal we’re using biologically, probably something in our pineal glands, a little magnetic crystal thing.

[00:21:58] Kevin: Yeah. If you found it useful, probably. If it didn’t really make any use of it, it might still have learned it to some extent. But if it found it useful, then it could really have tuned into it. Yeah.
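Dave’s anklet reduces to one computation: given the wearer’s compass heading, pick which of the eight evenly spaced vibrators currently points north. A hypothetical sketch of that selection logic, not the device’s actual firmware; the motor layout and index convention are assumptions:

```python
def north_motor(heading_deg, n_motors=8):
    """Return the index of the motor closest to magnetic north.

    Assumes motor 0 faces the wearer's direction of travel and indices
    increase clockwise; heading_deg is the wearer's compass heading.
    Both conventions are invented for illustration.
    """
    sector = 360.0 / n_motors                         # 45 degrees per motor
    bearing_to_north = (360.0 - heading_deg) % 360.0  # north, relative to motor 0
    return round(bearing_to_north / sector) % n_motors
```

Facing east (heading 90), north is on your left, so the left-side motor fires; the firmware would simply re-run this every time the compass reading changes.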

[00:22:11] Dave: So there’s all kinds of ways of getting a signal in, and having done six months of my life with electrodes on my head, doing neurofeedback to have better optimization and control of my own brain, I’ve been fantasizing for years about a tongue printer for feedback. Because the tongue is nerve rich.

[00:22:28] And for blind people, you can have a tongue printer. You can feel individual pricks on your tongue. So you can put something on your tongue and probably control your brain better than you do through your ears.

[00:22:36] Kevin: Yes, no, I had a research student, who was actually an undergraduate student, that did that experiment and connected up a little array. I can remember him. Ashley, his name was. 

[00:22:50] Dave: Really?

[00:22:51] Kevin: Yeah.

[00:22:52] Dave: That’s awesome. I would try that.

[00:22:56] Kevin: But he connected a little array, put it on the tongue. And it’s very fast. Very, very fast response through the tongue. And you can send little objects, and letters, and all sorts of things that the brain very quickly learns to understand. So quite amazing. But when he first did it, he wasn’t sure again what electric current to apply so that his tongue was okay. 

[00:23:21] And I think the original argument was whether it should be milliamps or microamps. And I suggested using microamps first of all, and seeing how it went. And he came back in the next morning to tell me how it was doing. And his words essentially were something like, aah.

[00:23:39] Dave: The downside of the tongue. 

[00:23:43] Kevin: True story. Yes. Watch how far you go when you’re experimenting, to start with. Start on the lower current and work up.

[00:23:51] Dave: The same thing comes with optostimulation. I used the very first infrared brain stimulator, which was handmade by a guy who sold 100 of them on Yahoo Groups in the ’90s. I put it over the language processing part of the brain, and I left it there a little too long.

[00:24:08] And for the next, about, four hours, I spoke in garbled tongues. At the time, I made my living giving keynote presentations about computer security. I’m like, I just seriously effed my brain beyond belief. And it came back probably stronger than it was before, but it’s not like problems don’t happen, right?

[00:24:27] Kevin: Yeah, yeah, yeah, yeah. Yeah. But you just have to live with them. Yeah.

[00:24:31] Dave: Let’s talk about some of the other problems. Since I’ve written big books on mitochondrial function, New York Times science books and stuff like that, I recognize that within the body, we’re communicating simultaneously with chemicals, which is the predominant view.

[00:24:50] We’re also communicating with electricity, which we’re figuring out, and with magnetics, which is provable, and now finally with biophotons, which are also measurable and quantifiable. It’s not science fiction at all. There’s one photon every 40 seconds from your DNA. So there are multiple signaling networks within the body.

[00:25:09] And with the longevity venture fund that I’m working with, we’re actually looking at investing in a company that’s using single photons to pass signals. We’re hacking the communications network between cells, or cell components, and it’s cool. But what I’m interested in is something called voltage-gated calcium channels. Are you familiar with those?

[00:25:30] Kevin: Yeah, yeah, yeah, yeah, yeah.

[00:25:32] Dave: So I’ve noticed in my own experiments on myself, and certainly from reading lots of literature, mostly out of Europe, it seems like certain EMFs are not good for cell membranes, particularly because of that voltage. They induce voltage on the cells, more calcium comes into the cell, which causes cell swelling and mitochondrial dysfunction. And it might be the dark side of tech. I think this is like Neuromancer, to go back to cyberpunk, where there’s the black shakes that everyone gets. Is that Johnny Mnemonic? That was Johnny Mnemonic. Anyway, they get the shakes–

[00:26:11] Kevin: They merge after a while.

[00:26:13] Dave: Right. Are you worried about EMFs with implants?

[00:26:18] Kevin: I always thought of the body more from the electrical point of view. That’s just my background. So it’s a bit of an issue with doctors and medicine, which as you’ve said, is chemical. You have a headache. You take chemicals anyway. Why can’t we do it in terms of the electrical side of things, just apply and do exactly the same thing, but from the electric– all right. 

[00:26:46] I will compromise and agree with maybe electrochemical and electromechanical. We can stretch it, because science really has historically been put into little pigeonholes. I know when I first did it at school with valences and things like that, how many electrons are on this? And realizing, doing it in chemistry, it was one thing. And then you do it in physics, and it’s the same thing, but it’s looked at in a completely different way.

[00:27:19] And how you’re looking at the same thing, particularly with the human body, it is a whole mixture, electrochemical, mechanical. It’s all the different things together. And learning what chemical effects an electrical signal has, it’s only through experimentation that you can do that.

[00:27:40] And you’ve got to be a bit wary of it if you’re applying it too much, or too little, or whatever, but if it’s not going to work, if you don’t apply enough, then you have to turn up the volume as it were.

[00:27:50] Dave: If you knew that you had an increased risk of Alzheimer’s or cancer from the work you were doing, would you have done it anyway?

[00:28:02] Kevin: Oh yeah, no question.

[00:28:03] Dave: I don’t expect that. 

[00:28:07] Kevin:  There was a team of four neurosurgeons involved with the experiment, and the main one, Peter Teddy, took me to one side about three days before we actually went ahead with the implant and said, look, this could go wrong. If it doesn’t work very well, you could lose the use of your hand, and you don’t have to go ahead with it.

[00:28:35] And he wanted to be sure that I understood what risks were involved. As it turned out, it probably could have been worse because we were sending signals into my nervous system. Maybe it did affect my brain in a way that makes some problems more likely, and so on. We don’t know that.

[00:28:52] But as a scientist, I wanted to find out. Yes, it could cause something, and I would accept it. I would man up and say, okay, yes, I brought that onto myself doing the experiment. And you can’t be sure. It’s like the Jekyll and Hyde, is a very good science fiction thing. 

[00:29:13] Would Dr. Jekyll have drunk the potion before he became Mr. Hyde if he’d have known what was going to happen? Of course he would. That was the whole point. He wasn’t too sure. He thought one thing might happen. That was part of the experiment. And it’s nice when you’re faced with a Jekyll and Hyde moment yourself. Yeah, of course, you’d go for it. That’s what you’re there for despite the dangers. Yeah.

[00:29:40] Dave: What do you say when someone looks at you and says, but it might not be safe? 

[00:29:47] Kevin: Really, boy, you do get it in the university when you’re about to do something like that– the insurance guys. What’s the university going to be liable for? You’ve got to sign these documents to say you’re not going to suddenly come after them. Almost surely, it’s not going to be safe. I think you take it that it’s not going to be safe anyway.

[00:30:06] You’ve got a decision. Am I going to stay within the bounds of what we already know with the safety involved, or am I going to push the boundaries a little bit? That’s the scientist I am and was. It’s somebody who tries to push the boundaries a little bit.

[00:30:26] Dave: I love that answer. Learning new things isn’t safe.

[00:30:31] Kevin: No, it’s not. It’s not. You try and make it as safe as you can. You try and learn exactly as best you can. But with the nervous system and the brain, there is so much that we do not know. We’ve got a basic understanding of some of it. 

[00:30:48] But even nerve fibers, there’s so much more we need to learn, exactly the things that you were talking about. How much voltage can we apply? How much current can we apply that is safe, and so on and so forth? And you take other parts of the body, and we’re still in a mist of lack of knowledge.

[00:31:15] Dave: I’ve gotten to the point where if someone says, what if it’s not safe? I say, what if it’s stagnant? That’s the opposite risk. So my new brand of coffee is called Danger Coffee, because who knows what you might do? You have to take a risk. You just take a calculated risk that is safe enough because it’s worth it.

[00:31:36] Kevin: I think that’s it. It’s a calculated risk. Yeah, yeah. But you don’t know; it’s calculated only in a sense. Because you don’t have actual numbers where you can say, this is 20% safe, or 40%, or it’s a 50-50 chance of working. You don’t really have that. You think you should be all right from your knowledge, the scientific knowledge, and other people’s experiments, as much as possible. 

[00:32:02] And that’s one thing with the BrainGate experiment that I’m a little bit disappointed about. Although the same implant has been used to help people who are paralyzed and so on, there hasn’t been a further set of enhancement experiments with the same implant, which, from a scientific point of view, is where you get your citations from, and so on. So I’m still waiting. So maybe yourself, or maybe one of the listeners today, if they fancy going for it, get on with it.

[00:32:40] Dave: The idea of human enhancement is something I’ve always believed in. I’ve been taking cognitive enhancers for 25 years. I’ve formulated some. The other thing is I do run electricity over my nervous system, and I’ve done that since, geez, over my brain starting in 1998, actually. I’m using Russian tech that was developed for their space program so you needed less sleep.

[00:33:05] The results are interesting. My nervous system is better myelinated than most humans’, which means I have thicker insulation on my nerves, so electricity conducts more quickly with less resistance over my nervous system. Are you familiar with the P300, the EEG measure?

[00:33:25] Kevin: Not initially, no.

[00:33:27] Dave: This is a lag time on reality. So for normal humans over 30 or so, there’s about 350 milliseconds of lag time. So if you clap your hands, you think you hear it, but you don’t. Your nervous system gets it, but from the first EEG signal showing that your brain’s hearing centers received it, there’s a third of a second in which your body decides whether it’s worth showing it to you or not, decides whether you should be startled. And then it hands you the sound, and it gets slower as you age. I’m still at 240 milliseconds, which is what the average 18- to 20-year-old has. So my response time–

[00:34:05] Kevin: This is you bragging about it at the moment.

[00:34:08] Dave: I am a longevity guy. It’s one of the many measures where my biology is healthier than it was when I was a 300-pound computer hacker. But what I’m saying is that external electrical stimulation can be an enhancement. And what you’re proposing is that internal stimulation with BrainGate could also be an enhancement. So I’m just drawing parallels between those two. And it’s still bragging.

[00:34:31] Kevin: For me, the big one, looking at enhancement, is communication, which, when you consider how we as humans communicate at the moment, it’s really pathetic in terms of what could be possible. At the moment, I have lots of thoughts, and you have lots of thoughts.

[00:34:50] Images, ideas, colors, feelings, all that sort of thing, emotions. And when we’re communicating, we convert them to mechanical signals, pressure waves, or movement, or whatever it happens to be, which is very slow and error prone. And then it gets converted back again for communication over wireless networks, or wires, or whatever.

[00:35:16] So the possibility of connections linking your brain up to the network directly, or your nervous system, if that works, to communicate was always one of my desires to investigate, and beliefs that, yes, we’re going to be able to do that in the future. So future communications will be by thoughts, or brain-to-brain communication.

[00:35:44] It was one thing my wife had for the experiment– two electrodes pushed into her nervous system. And we did send signals, simple signals, like open, close hand, and a Morse code type thing between nervous systems. And that worked fine. And it was great. Interestingly, I felt it was quite intimate.

[00:36:08] Probably more than I had imagined when we actually did the experiment. We could feel something between the two of us. Hey, it’s really quite special. But I do believe that in terms of enhancement, for me, the big one, the goal to aim for that would change what we are, is communicating directly between brains. If we could do that, it would be very difficult to simply regard us as humans. We’d be something a lot more.

[00:36:39] Dave: There are ways to do that that certain groups have done since the ’80s without having to tap directly in. I’m an external signals guy, but there are ways to get a signal off of a brain and then show it to another brain via existing senses, to the extent that I actually don’t do that with almost anyone at my neuroscience clinic because when you do that, it’s hard to have a firewall.

[00:37:09] Kevin: Yeah, yeah, yeah, yeah.

[00:37:10] Dave: You pick up the other person’s trauma. You pick up their preconceived notions. You pick up their judgments. So if you could do it with an enlightened guru, that would be good for you, but bad for the guru because he’d probably pick up your crap and then have to go meditate for a while or whatever the guru did to become a guru.

[00:37:30] So it’s like unprotected sex. You want a brain condom if you’re going to do a brain-computer interface with another human. And right now, we don’t know how to do it. And if we did know how to do it, as a computer security professional, most of our security systems aren’t that good. I don’t know that I want a brain-computer interface because, seriously, Mark Zuckerberg is going to be putting spam in there, or at least listening to what I do, if not the NSA, right?

[00:37:56] Kevin: Back to the negative side, or using the technology for something you didn’t want. I’m thinking in terms of what I was describing. From an academic point of view, let’s do the research. Let’s make this happen because I think it would be fantastic. But, yes, it opens up all of the negatives you could potentially imagine. Yeah.

[00:38:17] Dave: In fact, I think it’s a Bruce Sterling story, one of the characters– which one was this? One of his epic ones. Probably Neuromancer. One of the characters notes that people are going crazy because they got malware in their eye implants, and they’re just seeing ads 24/7, and they would commit suicide.

[00:38:37] Kevin: Oh, yeah. 

[00:38:37] Dave: Because they just couldn’t get away from spam. And I’m like, oh. Yeah, spammers would do that. The same guys have been clogging our inboxes for both your and my entire lives of having inboxes. So I’m hopeful that as we look at this, we look at the risks in a way that I didn’t when I was younger, even when we were creating cloud computing before it had the name. What are we going to do with this?

[00:39:02] And I think all of the tech has ultimately helped the world, but it’s also created a chance, in fact, a probability of a dystopian future, unless we address leadership and transparency issues in society. 

[00:39:15] Have you looked at longevity, using any of your tech to make yourself live longer, have a really sharp brain when you’re 120? Do you play around with those ideas?

[00:39:24] Kevin: In a sense, no. One of the bunch of experiments we did had that in mind, and that was taking brain cells, which we did from a rat, which was easier, and then putting them in a little dish and letting them grow, culturing them, feeding them, keeping them in an incubator.

[00:39:45] The reason I’m saying this– sorry, I jumped ahead a bit– for me, the key for longevity is the brain. I think we can look at all the other components and say, okay, we can come up with an artificial heart if there’s problems, and we can replace this. But the brain is the critical one.

[00:40:06] And if that changes, is it really you that is living longer, or do you become some other creature over time? Can you stay as you, as it were, keep your brain going? So it was looking at what happens when brain cells die off. Can we add brain cells? 

[00:40:27] Which we were doing in a little dish, just to see if this little robot, which had a rat brain– and rat brain cells don’t live as long as human brain cells, so could we replace some? Let’s kill them off in this region here. Let’s apply some new ones. But then the rat has lost the knowledge of what it was doing, of avoiding this thing coming towards it, or whatever, which was interesting. So it was to do with what you’re describing, but I’m jumping ahead there and saying the key issue for longevity, for me, is how the brain survives and what mode it survives in. Is it still you?

[00:41:04] Dave: Certainly, if you lose your brain, you’ve lost the game. And the more I’ve studied, the more I’ve realized that 80% of the nerves from the heart go into the brain, and the heart is a sensing organ, much like the eyes. And so you look at cases like Donald Rumsfeld, one of the most celebrated war criminals of recent memory.

[00:41:24] He doesn’t have a heartbeat. He has a mechanical heart that is a continuous flow heart. And I just wonder if that’s connected to some of the crazy ass stuff that he’s done. I don’t know, but what I do know is that the heart-body-mind-gut connection in the brain, there’s a lot going on. We know that proton spin in the brain changes direction every time the heart beats.

[00:41:52] So there’s a quantum entanglement that’s another signaling mechanism, faster than light even, that’s also going on. So it feels like we haven’t cracked the code enough to know that it’s the brain, but it’s a good assumption. And we know that if the brain goes, the rest of it isn’t worth it. But we don’t know that if you only have the brain, it’s going to be enough. I would want to academically pursue that and figure it out. I just don’t want to keep my brain in a jar trying to talk because that might suck.

[00:42:18] Kevin: Yeah. I had, a few years ago, a catheter ablation. I had atrial fibrillation going all over the place at different times. And the catheter ablation was part of the pathway to what you’re talking about. The treatment was going into the heart and zapping electrical pathways in the heart.

[00:42:43] So, to stop it fibrillating. I found it absolutely– I didn’t really know about it until I had it, and for me, it worked just fine. But the way the heart is operating, electrically rather than mechanically, or chemically, or anything else, was absolutely fascinating for me to find out first hand. First heart, I guess.

[00:43:07] Dave: Yeah, the timing systems– there’s so much interesting cross-system talk in the body that I think, with AI, it won’t be that hard to untangle a lot of it. 

[00:43:16] Back in 2018, biohacking was added to the Merriam-Webster dictionary as a new word in the English language. And people call me the father of biohacking because I started the movement years before that. And you, in the same year, in a TEDx talk, said that biohacking was an entry-level point into cyborgs. Talk to me about what you meant, what you think about that.

[00:43:39] Kevin: I think it’s more of a philosophical thing than a technological one. Maybe a bit of both. You can have different concepts of cyborgs. I’m not one who believes that anybody who is part technology and part human has got to be a cyborg. Otherwise, just by wearing glasses or riding a bike, you’re a cyborg. 

[00:44:02] So I’m looking at a cyborg as maybe a more science fiction type of cyborg that has abilities that humans don’t have that does involve a more semi-permanent or permanent connection between the technology which is integrated into the human, or the human which is integrated into technology.

[00:44:24] So it’s something like that. So that’s how I would see biohacking as being an entry level cyborg. You’re starting to get into it, and you can do some things, and if you have an implant or you have something connected into your body, as you were describing with learning where North is, and so on, and so forth, then that’s how you’re getting in that direction, but in a basic way. But ultimately, it is more the science fiction, the Arnold Schwarzenegger-type version, or whatever it happens to be, or the Neuromancer, and so on.

[00:44:59] Dave: Yeah, The Six Million Dollar Man. I think it was the first TV show I know that–

[00:45:02] Kevin: Yeah, yeah.

[00:45:03] Dave: Had that kind of stuff. There are some interesting questions that come up around what happens with cyborgs because in the biohacking movement– so I started it in 2011, was the first blog post with the definition, and the first conference was in 2013, 2014 maybe. Yeah, 2013.

[00:45:24] So all of that, it’s progressing, and then a group called Grinders came out. And these are mostly people who do tattoos and body piercings, and they started making their own implants. And I thought about getting a magnet on one of my fingers because you can actually sense electrical fields. That’d be really cool.

[00:45:42] I did start a medical testing company in 2008 that looked at immune rejection of implant materials. And I recognize my body is magnetic, and it’s just like, I don’t know the unforeseen consequences of having a magnetic finger. I think I’m going to wait on that one. What’s your take on the Grinder movement? Are you part of it, and is that where you see biohacking going?

[00:46:04] Kevin: I don’t really see myself as part of anything like that, but I’m interested in it. There’s a group in Pittsburgh– if any of you are listening in, hi, how are you doing? Some of the things I think they do are highly dangerous, and I think you must be crazy putting that into your body to find out. But that’s their forte.

[00:46:27] But other things, I think, are very interesting. And even getting things to light up under the skin and so on– some of it looks pretty cool to me, perhaps more from an artistic point of view than a scientific point of view. Good luck to them. Power to their elbow, or whatever they’re highlighting.

[00:46:50] Where we’re failing at the moment is for guys like that to pull that somehow into the academic world, because I’m sure they’re getting some really good results that have scientific interest, but we’re not pulling them in there, and that’s not them physically. It’s the results I’m talking about.

[00:47:12] There should be a journal of biohacking, or something like that, so that people can come at that journal, perhaps from an artistic, perhaps from a medical, perhaps from a scientific [Inaudible] background and get results from experiments like that. And it would have to be a more experimental journal and reporting on things like that, but we don’t have that.

[00:47:35] So, no, I’m interested in what they’re doing. Some of them, I think, I’m Facebook friends with, a number of guys down there in Pittsburgh, and other places. But I wouldn’t classify myself as being one of the Grinders or anything like that.

[00:47:52] Dave: You’re adjacent to them.

[00:47:53] Kevin: I just do what I do. If they want to make me an honorary Grinder or whatever, then, fine, go for it. Yeah.

[00:48:00] Dave: It’s your new title. Honorary Grinder and Emeritus professor of–

[00:48:04] Kevin: Yes, Emeritus Grinder, I think.

[00:48:07] Dave: There you go. Same thing. Having had enough surgeries and enough medical issues in my life, I don’t really like the idea of having more implants. And as a computer hacker, if there’s a computer in there, it can and will be hacked. It’s inevitable. And you might not like what happens if you can’t get it out. 

[00:48:33] There are cases you’ve probably heard of, a few people who have electronic eyes that replace their eyes. These are people who were blind. And the company that made the eyes went out of business. And so now they have unsupported hardware in their eyes and no way of getting it out.

[00:48:49] And my call would be for lawmakers in whatever country they’re in, and I don’t believe there is a global law, or that there should be, and anyone who tries to do that is probably not your friend. But in each country, there needs to be something that says, if you are selling implants, your source code, all of it, must be placed in escrow. And if the company goes out of business, it automatically becomes open source.

[00:49:18] This is why it’s called biohacking, not some other word. Because hackers are willing to create Linux, which is what’s driving our conversation today. If we don’t like that Microsoft won’t tell us what’s under the covers, we’ll just make our own operating system. So if I’m going to have any implant ever, full source code access. Nothing hidden in the cloud, nothing that I can’t change, because then I will know if I’ve been hacked.

[00:49:40] And unless it’s under those conditions, there’s no way in hell I’m going to do it. And because of the EMF problems, I think I’m going to wait a little while, because the evidence is mounting more and more that having Wi-Fi signals, especially inside mitochondrially drenched tissues, probably doesn’t lead to good outcomes.

[00:49:59] I think it’s a hackable problem. We can enhance our biology to be resilient to that. We can turn it off. We can actually change the signal so it’s a beneficial signal. A few companies I’m working with do that. But until those are solved, I don’t want stuff in my head or anywhere else in my body, but I want a laser eyeball because it’d be cool.

[00:50:19] Kevin: Yeah, no, I agree with you about the source code and the technology involved in it. Exactly what you’re saying. There’s got to be some way of having backups, or replacements, or whatever it happens to be. I’ve been involved with surgeons with deep brain stimulation, which is a commercial product.

[00:50:37] To an extent, they do have some of that, but the companies that are involved seem, financially, to be pretty sound at the present time. I say that, and that’ll probably sound the death knell for them. But it would be an issue, I think. I think it could be a problem, even in that field, if the worst was to happen and there were company problems.

[00:51:00] So it would be good to have exactly what you’re saying. When you’ve got something implanted, when it’s life-dependent, or who you are is dependent on it, you need to have some regulations in place to make sure that this can’t happen, or they can’t just close down, whatever, and the technology’s gone.

[00:51:22] Dave: This is a thing that deserves constitutional amendment-level protection in my home of Canada, a Charter of Rights situation, where, number one, the source code’s available, and number two, no one, under any circumstances, even for your own safety, has the right to force you to have any cybernetic enhancement against your will. 

[00:51:45] And given the last three years where people were forced and coerced into having medical treatments, despite what the Nuremberg Code said, and people have different opinions about that, I don’t care what your opinion is. The bottom line is, it is always wrong to force people to do something medical that they don’t want to do, even if there’s a good argument for it, including their own safety, as we talked about before.

[00:52:08] I can see a very dark future where if we could force people to get one injection, we can also force them to get brain implants. That’s not the world that I am creating. In fact, I won’t allow that world to get created. So I’m hopeful that as we move forward with improving ourselves, as we always have throughout all of history, that we’re mindful of this.

[00:52:28] And with that in mind, you have a great track record of being very early, being an innovator, as do I. Tell me your view 25 years from now of what it’s going to look like with cyborgs and humans. 

[00:52:43] Kevin: I think the big one is the issues with AI. How is AI going to be used per se? So this is not answering the question directly that you’ve asked, but how far will AI go, and how will it be relied on and used, for itself, and given control over this, that, and the other? And particularly with networking, it is being networked more and more. Very often, we can’t really tell, if this happens, what the consequences are going to be.

[00:53:15] It’s very difficult to work that out. And in a military scenario, if that is linked in with a financial setup, etc., there could be big dangers, as I think some people in the AI field are recognizing. So we’re getting away from the use of AI in a machine that you can switch off and switch on, and you can decide when it’s going to play and when it’s not going to play.

[00:53:39] This is something that is in control of a scenario, and it could be dangerous. I think then there is a need to move to the cyborg setup. The AI is not working alone, but it’s working as part of you, as a network. Then we’re going into a science fiction scenario. Is it I, Robot even, or is it the global network– whatever. Is it the Matrix that we’re talking about in the future? And would it occur in 25 years’ time?

[00:54:16] I think history is littered with scientists making miscalculations of how long something is going to take, or whether it’s going to be or not going to be. Rather than it being a cyborg problem, I’m changing the question to: it depends how quickly and how far AI is going to be integrated into everyday life. At the moment, it’s going very quickly, but–

[00:54:44] Dave: The reason I started laughing like that is, in another life, I was the first person to sell anything over the Internet, and it was a t-shirt that said, caffeine, my drug of choice. And it was over Usenet, before the browser was created. And I was in Entrepreneur magazine when I weighed 300 pounds, and I’m wearing this double extra large t-shirt. 

[00:55:05] I was just trying to pay my tuition. It was a small business. It wasn’t a big thing. E-commerce wasn’t a word. But they interviewed me, and I said, oh, in five years, there’ll be no more need for junk mail. We’ll be able to get all of our communications on email. And that was, I don’t know, 1993. Was that 30 years ago? And I still get crap in the mail, so clearly, I’m one of those people that history is littered with. That’s why you made me laugh.

[00:55:31] Kevin: I’ve got a better story. In ’97, I went to the World Economic Forum in Switzerland with my wife. The very first day that we were there, we met up with some guy called Jeff Bezos, who had got this–

[00:55:51] Dave: Back in the day. Right.

[00:55:52] Kevin: Website called Amazon, which I’d never really heard much of before. And formally, as we were part of the system, we had to go to dinner with Jeff and his wife, and we were sitting there talking before we mingled with the other people from different companies and that. And so he asked me, what are you doing? Oh, I’m into robotics, and implants, and so on and so forth.

[00:56:21] He seemed, I don’t want to say excited about it, but interested anyway. I asked him what he’s doing. We’ve got this website, which at the time, was selling books. That’s really all it did. And he asked me, what do you think? Should we expand it? And so my advice was, no, don’t bother doing that. Exactly what you were saying. I thought, no, it’s not really going to take off. Just stick with the books and make things work with that. You’d be safe on that.

[00:56:50] Dave: Do you still go to the World Economic Forum? Was that like a one-time thing, or is that a regular thing?

[00:56:54] Kevin: For me, it was a one-off. It was interesting, but not a regular thing that I had to do.

[00:57:01] Dave: There are some interesting schisms there because there were two founders of the World Economic Forum. There was Klaus Schwab, and the other guy dropped out in disgust because it was being used in a way that he didn’t like. And they’re big proponents of transhumanism, and I think, given their public statements, they’re the people who would happily put implants in you so that they could feed you bugs.

[00:57:27] I don’t think they’re working for the good guys right now, but maybe when they were founded, they did. You never know if someone goes or doesn’t go what their motivations are because if you’re working to fix a wrong, you might hang out with people doing bad stuff.

[00:57:47] So there are people who get really triggered by that idea, and I’m curious about everything, including transhumanism. I do not believe that a dystopian future is what we want. I also believe that if I am in control of my own biology, it’s my right to put any hardware I want in my body. And that I’d be really stupid if I put hardware in my body that someone else controlled, because that’s a bad idea. Just imagine if the guys who do TikTok’s algorithm were in charge of what’s in your brain. That’s not good.

[00:58:20] Kevin: Yeah, I agree with you 90%. I’m thinking, again, of medical devices where, if you did end up with Parkinson’s disease and you’re faced with a company saying, we can put something in your brain that allows you to live a relatively normal life, and it’s going to apply signals when it thinks that signals should be applied, then how much would you trust them? 

[00:58:48] Would you say, no, I want to stay like I am, which could be pretty awful? Or do I want to go for this? Because you trust them that they know what they’re doing, and they’ve tried and tested, and there’s many people before you that seem to be all right with it, and so on and so forth. 

[00:59:05] Dave: All right. I would totally trust them as long as I can either remove the stuff or I have source code access if I can’t remove it. That would be the thing. 

[00:59:16] Kevin: Have it removed, or somebody can remove it– 

[00:59:18] Dave: Yeah, knowing that it could be removed safely, versus those eyes that, once they’re in, they’re in. And when I look at AI, and cybernetics, and cyborgs in the future, what I see happening is that there will be large numbers of AI systems vying for your attention, and they will be the best at manipulating you, better than the best sociopath.

[00:59:41] In fact, it’s called marketing. We’re pretty good at it already. What we don’t have yet is what I like to call cognitive firewalls. So when we do have augmented reality glasses that you’d want to wear, or even just running on your machine, there’s no reason that every web page you see shouldn’t be rewritten according to my rules for AI to only show me what I want. 

[01:00:02] There’s no reason that every YouTube video shouldn’t be automatically translated into two paragraphs of text instead of watching 10 minutes to see some guy with a weird face that’s probably auto-generated anyway. So it’s up to us to arm ourselves with AI that actually is used to filter reality so that we only get the information we want in a way that’s not manipulative.

[01:00:21] And if you could have that built in– if you do, it’s called subconscious processing, and it’s very energy efficient, because what we’re both seeing and sensing right now is probably 0.1% of all the signals coming into our bodies; the rest gets filtered by your body. I need augmented filtering that’s on board or off board to help me ignore stuff that doesn’t matter and allow me to choose where my attention goes.

[01:00:46] And that’s the attention economy that Wired has written about and everything else. I just want to be the one who trumps what all these bad systems are doing so that I don’t get distracted by nonsense and can stay focused on stuff I care about. Does that sound like it might fit in your future?

[01:01:01] Kevin: I’d go for it. Yeah. I’ll sign up for that. Where do I sign? Yeah.

[01:01:04] Dave: All right. Somebody come start a company with me on that. Not like I don’t have enough companies. But yeah, that’s a blending of computer security, cognitive science, and AI. And it’s the equivalent of my old company, Trend Micro.

[01:01:18] Kevin: Yeah.

[01:01:18] Dave: I need antivirus software for my mind right now, not just for my computer.

[01:01:24] Kevin: See, it’s just as well we didn’t meet up ourselves at the World Economic Forum. Otherwise, if you’d asked me what I thought for the future of the company, it would have gone nowhere. Got it completely wrong.

[01:01:36] Dave: I’m sure that’s why Jeff asked lots and lots of different people. And Amazon was a big customer of Exodus Communications where I was a co-founder of their consulting business. And it was an amazing time in the ’90s for the expansion. We’re doing that right now with AI, and we’re right at the beginning. It’s 1993 right now for longevity. 

[01:01:54] Kevin: Yes, yes.

[01:01:55] Dave: And the Internet really hit in ’97, is when it just went crazy. And so right now, the longevity business, the number of small companies I’ve talked with who are going to add five, 10, 15 years to human life, oh my God. The next five years is going to be the biggest ever, and it’s going to make the Internet look small. And once we have 200-year lifespans, I’m going to need that cognitive firewall. Otherwise, I’ll spend the whole time watching TikTok.

[01:02:23] Kevin: Yeah. And this is where it becomes very difficult predicting what’s going to be big, which direction is going to go in, and that’s where Jeff Bezos got it right, I guess, as far as Amazon is– and I got it hopelessly wrong. That’s why I’m not extremely loaded and very successful in business, but no problem. Good luck with it yourself. Hopefully, you get the longevity. You do the Bezos as far as longevity is concerned.

[01:02:51] Dave: I don’t need to be that absurdly wealthy, and if I am, I will be using it for the greater good. That’s for sure. Kevin, it’s been an honor to talk with you. I’ve literally known about your work for 23 years, and we finally got to connect. So thanks for being a pioneer. Thanks for being dangerous.

[01:03:08] Kevin: Fantastic. It’s gone so quickly. I didn’t believe it. 

[01:03:12] Dave: It has. And I truly see you as just a pioneer in being human because you’re saying, it was worth it. I’m going to take the risk, and I’m going to see what happens. And you were willing to accept the risk in order to receive the knowledge. And you did, and you shared it, and it’s academically amazing. 

[01:03:34] And for listeners, there is a dark transhumanist future that’s possible. There’s also the possibility of enhancing your ability to show up in the world in a magical way. And I am well aware of the risks. You should be too.

[01:03:48] Kevin: Linking your brain up to a computer, I think, opens up lots of positives, and some of them– the possibility of thinking in more dimensions. I know I talked about communication before, but the possibility of doing just that one thing, you can think in maybe 10 dimensions, whatever, which a computer of course can do. It can process in all sorts of dimensions.

[01:04:15] This is, again, science fiction, I guess, but it can change how we look at the universe around us. Perhaps it will allow us to travel much better than we can think of traveling at the moment. Because we’ve gone almost nowhere in the universe. Just to the moon, which is like an outside toilet for the Earth. There’s hardly anything there.

[01:04:39] The possibility of doing the Star Trek and travelling out into distant galaxies and so on. It’s got to be easier than actually having to almost pedal into space. So I just hope that we can understand the universe around us in a much more complex way if we link our brains up with technology.

[01:04:59] Dave: Now you got real metaphysical on us. And there are ways to do that with tech. I referenced one a little bit earlier. You can merge with another person with some of the stuff I’ve done, but those are the same things that mystics do in caves, and the old knowledge from Ayurveda, and Tantra, and yoga, and advanced Zen meditation, which I’ve done.

[01:05:23] Yeah, there’s levels that we don’t normally access. I think it was accessing your operating system and seeing all the stuff that the body throws out before it shows you. I do think brain computer interfaces are a way to do that, whether they’re external or internal. 

[01:05:38] And I’ve had profound dissolving into the universe experiences just using electrodes on my head, putting no signal in. Just turning off useless signals in my brain so I could access things. So there are levels where I think we can go. They tend to look a lot like yogic siddhi powers and things like that, but some people are going there with or without tech. It just seems like the tech makes it easier, and it’s probably better than ayahuasca.

[01:06:02] Kevin: Oh, yeah.

[01:06:05] Dave: Wow. I’m so excited to hear your thoughts on all this. And I love it that you talked about perceiving reality in a different way, through enhancements and augmentation. I think it’s possible. In fact, it’s one of the reasons that I believe we need to know what’s going on on the backend, because imagine if that was possible and some developer turned it off because they didn’t think it was useful. I don’t want that, either.

[01:06:29] Kevin: Yeah, completely. Which, for sure, they’re going to. Yeah.

[01:06:33] Dave: Thanks again, Kevin. It has been a profound pleasure and an honor.

[01:06:37] Kevin: Great to meet up with you.
