My guest today is Jay Ruparel, co-founder and CEO of VOICEplug AI, a Voice-AI company empowering restaurants to leverage AI and automate food ordering using natural language voice ordering at drive-thrus, over the phone, websites, and mobile apps. VOICEplug's technology integrates with existing systems and apps, allowing customers to interact with the restaurant using natural voice commands, in multiple languages and be serviced seamlessly.
I wanted to sit down with Jay to unpack what he has learned about how conversations are structured (for computer-to-human interaction) and how he brings that into his CEO (human-to-human) conversations - the crucial conversations with his senior leadership team and his broader organization. Does an AI-savvy, conversation-aware CEO approach conversations and interactions with a different eye?
We also focused on a few questions of deep concern for our culture today: the responsible and ethical use of AI and how it might impact the future of work.
Through our conversation, it became clear that:
AI is great for:
Repetitive or highly similar and constrained tasks. Ordering fast food at a drive-thru, VOICEplug’s use case, is a perfect context for AI. In these kinds of conversations, there are boundaries on the scope of the interaction and a clear set of intents and possible goals.
Jay also points out that his AI is trained on many, many different instances of people ordering food from other people. So, the voice-driven bot can get better and better at these kinds of conversations, all the time.
Humans are best for:
High-risk and high-complexity conversations with no clear comparables or no clear scope. For Jay’s conversations with key industry stakeholders, at company all-hands, and with his leadership team, AI can give him ideas or first drafts, but ultimately he needs to navigate nuance with his human conversational intelligence.
++++++++++++
AI is great for:
Crunching lots of data (which is always from the past) and summarizing it.
Humans are best for:
Deciding what kind of future they want to create.
Jay points out in the opening quote that the human mind can think, reflect, envision, and CHOOSE an ideal future, creatively. AI can do a lot of that…but it can’t choose the future it wants. That is still a uniquely human strength - to dream and to choose to create that dream.
Jay dreams of a future where work is a deeper and deeper collaboration between humans and AI, where humans focus on higher-value activities while AI takes over repetitive tasks.
Jay goes on to suggest that curiosity and powerful questions are THE most critical of human skills.
When I asked Jay to share his favorite ways of designing conversations, he shared three tips:
Take just a few minutes before a meeting to be very clear about your one or two key objectives for the conversation. In other words, start with the end in mind. Another way of putting it is to take time to set an intention. You might enjoy my conversation with Leah Smart, the host of one of LinkedIn’s top podcasts, on just this idea.
If Jay is meeting with folks he doesn’t know as well, from outside the company, like new clients or stakeholders, he’ll deliberately slow down the conversation and delay getting to the core objective. Instead, he’ll spend 20-30% of the meeting time getting to know them, talking about other things, all in service of trying to understand them as people, and their conversational style.
Jay consciously chooses some conversational areas to NOT be highly scalable or automated - he shares a story about being offered an AI tool that would send automated, personalized birthday emails to his employees. As he says:
“What is the point of me having to use that as the CEO (when)…that relationship, that wishing someone on their birthday as a personalized conversation means so much to me. That's the last thing I would want to ever automate.”
Not all conversations, even ones that can seem small and inconsequential, SHOULD be automated. It is possible that a real, human touch will be the ultimate luxury in the future.
Links, Quotes, Notes, and Resources
Min 38: The importance of asking the right questions in an AI-driven world
“In this new AI-driven world, the answers are out there, but the questions are really important. And that's why this whole subject of prompt engineering is going to get more and more important. In the organizational context also, we have kind of given up a little bit on curiosity as a human trait. Somehow that has been waning and now has to come back to the fore. Basically, getting answers is easy. That's the way I think the future is going to be. You can ask a bunch of people and you'll get answers, but what questions to ask, how to ask them, what context to keep in mind, how to make the questions have the right language, the right conversation style. This is very, very important.”
AI Summary and Key Moments by Grain
Jay started VOICEplug to improve human experiences with technology, specifically in the food ordering industry, by using AI to understand human language (0:32)
Jay and Daniel discuss the difference between augmented and artificial intelligence, with Jay emphasizing that AI should augment human capabilities rather than replace them (2:42)
Jay acknowledges the limitations of AI in envisioning the future in isolation of the past, and emphasizes the importance of human creativity and envisioning capability (7:52)
Jay discusses designing conversations to enable humans to have better conversations rather than just replacing them with bots, including using voice biometrics to personalize interactions at a drive-thru (20:29)
Jay shares an example of choosing not to automate personalized birthday wishes and discusses his approach to designing conversations as a CEO, including starting with clear objectives and considering the context and participants (25:04)
Jay notes that AI is past-oriented while visualizing an ideal future is the job of humans (35:11)
Daniel and Jay discuss the positive and negative aspects of AI learning from the existing workforce and pushing everyone to reinvent the way they deliver value, and the future of work involving a mix of humans and AI bots working together (47:47)
Full AI-generated Transcript
Daniel Stillman 00:00
I'll record here and then I'll officially welcome you to the Conversation Factory. We're live. Jay, thank you so much for making the time to have a deep dive conversation with me. I really appreciate it.
Jay Ruparel 00:14
Well, thank you for having me. I'm really excited being a part of this podcast.
Daniel Stillman 00:18
Thank you. So can we put VOICEplug in context, and could you talk a little bit about why this problem is important to you and why you started the company?
Jay Ruparel 00:32
Yeah, so I have been in, broadly you can say, the data science, analytics, AI space for over a decade, and I've always been looking for what AI can do to really impact human experiences with technology. At my previous company, we were doing some work around AI in retail, and often it was about how the technology has moved so fast, but there is a whole segment of people that is struggling to learn and understand how to best use technology, how to have the right sort of interactions, conversations with technology. And so really what got me thinking about this problem is: how can technology do the hard work of understanding the human language, rather than humans having to understand how to deal with technology? And so we really took that as the basis and said, if you look at industries today, what are some of the big problems that we see in human-computer interactions that we can solve through AI? And that's where we initially built a minimum viable product for voice-based ordering and also voice-based shopping and took it out to the customers and got their feedback. We got overwhelming feedback about the positive experience that customers had when we showed them the voice ordering feature. And we felt that there was a real need for that, with just the kind of labor issues that existed in the hospitality industry and the not-so-pleasant experience that people have had when ordering food, whether it's at the drive-through or over the phone. And that's where the journey of VOICEplug started: basically with the idea of how can we make the human experiences with food ordering, using AI, significantly better than what they have been?
Daniel Stillman 02:42
So on our last conversation, we talked a little bit about augmented versus artificial intelligence. And it seems like your technology sits at a layer where the human eventually can interact with a human, but there's technology that's mediating a layer and maybe eliminating the human at one level, but in other instances making their lives on the other side of the technology easier and more efficient too. Instead of a direct human to human interaction with a phone in the middle, there's a bot in the middle that's helping smooth over the interaction. Can you talk a little bit about your view of augmented versus artificial intelligence?
Jay Ruparel 03:27
Of course. So most of us think artificial intelligence is basically what would replace natural intelligence or human intelligence, and so a lot of activities or jobs are thought to be at risk if AI comes in. The fear psychosis that prevails about AI taking over jobs exists because AI is thought of as something that replaces humans. Right. Whereas with our solutions - and a lot of companies are approaching it with this angle - the question is how can we really augment current human capabilities by using AI to do tasks that can be automated, that are routine, that don't necessarily need the best of human creativity. Really dividing the set of activities into what can be best done by automation and AI, and what activities still need the human creativity, the human compassion. So for example, when it comes to dealing with guests in the kind of hospitality industry experience that we have had, the compassion with which the staff can deal with guests in a lot of different scenarios, the creativity involved in solving specific problems - however good an AI solution you create, it cannot replace that. But there are things that are automated, that are done day in and day out. Like, for example, I want to just order the same thing at the drive-through every time I go there. You don't necessarily need a human dealing with that. The AI can remember the customer and say, would you like to repeat your last order or your favorite order? Yes. And then within 30 seconds they are through, because the customer is looking for quick throughput and efficiency at that time. So augmented intelligence is really augmenting the human capabilities, human intelligence, to really deliver the best output for the end customer.
Daniel Stillman 05:48
Yeah. Where do you feel like the limits are? You talked a little bit about where sort of a voice activated bot shouldn't be used. Where else do you feel like the limits are versus where it's best deployed?
Jay Ruparel 06:04
Yeah, well, actually, I think if you really look at it, bots are best used where good-quality and relevant data is available. Because the whole premise of AI is that you need to feed patterns and data sets for the AI to really learn from, and then be able to interpret real-world scenarios by using that learning to extrapolate results. Right.
Jay Ruparel 06:46
Where there is a steady set of data, trends, patterns that can be extrapolated, that's where the bots can best be used. But where there is a fair degree of uncertainty or ambiguity, where the level of abstraction of the problem is higher, that's where AI bots would struggle. So clearly that's where you can kind of separate the capabilities of the bots from the human: if it's a new scenario, if it's a completely new initiative that has no relevant data that you can extrapolate from, AI is not going to be useful there. Maybe once it evolves to a stage where you have operationalized it, now you can have AI step in and take over that activity.
Daniel Stillman 07:49
Yeah, we talked about this a little bit in our last conversation. The idea that AI is taking data and trying to extrapolate or learn from that, but it being fundamentally past oriented or past based. There is a limit there. And so I'm wondering how you think about being clear on that limitation for yourself and in the application of this technology.
Jay Ruparel 08:19
Yeah, the beautiful thing about the human mind is that we can think about what we are thinking and we can reflect on what we are thinking, and that allows us to not be limited to envisioning the future based only on past reflections. The bot - or AI in general - has this limitation: it cannot really think about the future in isolation of the past. And that's what is fascinating for me in understanding human-computer interactions: a bot-to-bot conversation and a human-to-bot conversation are differentiated by the fact that the bot-to-bot is all based on just reflection of what they have dealt with in the past. And the moment something new comes up, they can't really envision a new future out of it. Whereas with humans it's different. So I think that capability that exists - it's something that, when a lot of people ask, will AI rule humans, and we hear about all these sci-fi kind of themes about AI taking over the world - I think it's not going to happen for this very reason. I think the way we can think about the future
Jay Ruparel 10:08
is clearly something which is a very powerful capability that only we have, and with whatever sophistication you use to build the AI, it will always be limited and can never match that human envisioning capability.
Daniel Stillman 10:30
It's so interesting, and I think it's really profound. This is something that I've thought about - shameless plug for my own book - when I was thinking about this, as we all have been for years. It's like it can free us up to be uniquely human, right? That automation, and pulling some of these things off of our plate, can free us up to do the things that we are uniquely good at, which is to dream, to be creative. What's interesting, when you were talking about being data- and learning-focused and more transactional: I definitely know people who get stuck in that type of conversation, where we're looking at the data and we're trying to, sort of, look at the near term, versus dreaming, being creative, letting conversations flow. And I'm wondering how you, in your own work with your team, make space for these really deeply human conversations, because I think it's actually very easy to get pulled back into maybe a little bit more of a mechanical approach. Even though we're not mechanical, I think we can get pulled into: well, where's the data, where's the analytics, what's next, what's next - versus thinking three, five years out. So I'm wondering how you thread that needle, because I don't think it's an easy needle to thread.
Jay Ruparel 11:56
Yeah, I think you brought up a fascinating point. It's something that we are constantly thinking about at the workplace: where do you strike the right balance in how much to use AI for your own work? So starting with, for example, we have engineers, and we now have AI-based coding tools that can basically generate a piece of code, right? And so we have discussed with our engineering team, and we have had some debates about whether we should use some of that capability that the AI throws up, either for validating code or for generating ideas
Jay Ruparel 12:46
not just restricted to the engineering team - even for the marketing team, to say, hey, I want to write a blog, can I use ChatGPT or other generative AI to basically help build that? For example, someone told me this when I wanted to give a speech. We have a town hall that we do every month, and someone said, you want to talk about this? He said, I used ChatGPT to just basically write my whole speech that I give to the employees. And I thought, well, let me try that. And across all these different examples, what we have realized is that you can use generative AI, you can use any such AI tool, for giving you some ideas to explore. You could generate some code and ask, is this maybe a good way to solve this particular design problem I'm dealing with? Or I can say, okay, I want to talk to employees about this theme, can you help me generate some ideas? And because it deals with large data sets, right, it is like someone assisting you, scanning the whole range of different articles and books and coming up with themes for you, right? So that's a great help. But then being able to decide what is or isn't appropriate for that context, how to use some part of it, what part to use - that is where I really think we need to differentiate ourselves and say, that's where the human capabilities come in. So in terms of that human connect: when I'm actually talking to my employees in a town hall, I would never use a speech that is generated by ChatGPT, because it would be cold to me, because it's not my own. It may be a great speech, but it's not my own. The way I connect with my employees, and how I can personalize every bit of what I want to say, is not something that can ever be completely replicated with an AI. So that is one part of it.
The other is just human-to-human conversations, like how we are talking now. We have discussions over Zoom or in person, because we have remote employees, and we are always looking for how to have conversations beyond using technology tools, because in our day-to-day life we are already doing too much of it. Right? So we had this classic case - and this happened in my previous organization. Someone had a loss in the family, and everyone across the organization was sending condolence messages, whereas they were just on different floors. I mean, they could have walked down. Right? And that caught us. We said, this cannot be happening, we can't allow for this. I think it's important for organizations - and we are consciously working on that - to say, how can we keep that human-to-human element alive? And there are so many benefits of that. Having those short conversations at the coffee station, or just chitchat before and after the meeting, and the kind of connections you develop, is not something that you can do with a bot or using any technology or tools.
Daniel Stillman 17:02
No. So this goes to one of the conversations I wanted us to have, which is around conversational intelligence because we give these bots a certain level of conversational intelligence based on them trying to learn what it is that we do in normal conversations. And then there's us as human beings being really intentional about continuing to expand our own capabilities to connect and to develop relationships and to learn and create meaning with other people. So I'm wondering, I learned a lot in my own research around designing human conversations by seeing how conversational theorists and technologists were trying to understand and model human conversation. So I'm curious if understanding human conversation from a technical perspective has changed how you think about building good human conversations.
Jay Ruparel 18:05
Yeah, no, that's a great question. I would say one of the things we do - and it kind of differentiates us as a company in building these AI solutions - is that we record live conversations between humans and the staff at, let's say, the drive-through, and then use that live audio to actually train the AI. There are others that, for example, use synthetic data, where you basically imagine the conversations that people will have in ordering food, versus us using these live conversations. It helps us really understand the whole variety of ways that people talk for just one use case, which is ordering food. Right? So the AI learns from those conversations and builds on that. What is interesting, though, is that when humans are having these conversations and they know that there is a bot on the other side, they basically adjust their expectations, because now they know: I'm not talking to a real person, I'm talking to a machine. It's like when we go to the bank and talk to the cashier versus when we go to the ATM - our expectations are very different in terms of what we would ask, what we would do and not do. Similarly, human expectations are reset just by whether they're talking to a bot or another human. And that changes the whole conversation style, the level of the relationship, the level of compassion - all those elements are completely different. And so our learning from this has been a couple of things. One, that there are certain types of conversations that we would never want to replace. There are conversations that are important to happen between a human and a human, and you shouldn't want to use technology there, because although it may give some benefits in the short run in terms of cost savings or efficiency, over the long run it is actually a deterioration of the whole human-to-human relationship.
Yeah, the other thing is that, even in terms of how we design these conversations, we are now looking at: can we actually enable humans to just have a better conversation? Previously we were looking at the problem as, how can we replace some of the automated conversations by plugging in the bot? Now we are saying that there are situations where we want the humans to continue having the conversations, but how can we use that augmented intelligence approach, where we actually give the humans enough tools so that they can now converse more intelligently? So for example, using voice biometrics, you can identify if the person there is a repeat customer at the drive-through. Now the AI can use that and say, welcome John, and would you like to repeat your order? But if we don't want to do that and want to continue with the human interaction, how about if the drive-through staff gets the same intelligence: hey, this is a repeat customer; by the way, he ordered XYZ last time. And so they start by personalizing: hey John, would you like to order the same burger and fries that you ordered last Friday? And what a great human experience that would be. So really looking at this angle of assisting or augmenting the human capability in doing the right kind of conversation, that's.
Daniel Stillman 22:48
Very interesting, because you're basically giving people a field of data. We've talked about this in the past. I use a tool like Grain that transcribes my coaching conversations and summarizes them, and I have a document that I share with my clients where it's all there. It's extremely valuable to me to be able to see that high-level summary anytime I want to. It's also really interesting to think about this idea of transactional versus - I wouldn't say non-transactional conversations, maybe more relational conversations. Because I remember, when I was first starting to study conversation design from a technical perspective, Google talked about this idea of the collaborative theory of conversations: that we each come to a conversation and we try to exchange meaning and act, or transact, based on that conversation. I reached out to you and said, hey, would you like to be on this podcast? And we had a conversation about, well, what does that mean, and can I do that, and what is Daniel looking for, and what am I looking for? And now here we are. We're acting and transacting based on it. It's relational, but it's still, on one level, an action - we want an action to happen at the end of a conversation. And one of the things I heard you talk about was the human conversations that we have in meatspace, or in relationship to other people inside of the company. It's very easy to get caught in transactionality versus having a conversation in service of the other person, in service of their growth, in terms of their development. And so being really clear on what the goal of a conversation is, is really important. Because you're right, when we're going to a drive-thru, we're talking about ordering a sandwich, we're in our car - it is a transactional conversation. But I think it's very hard to remember - I certainly have a hard time remembering - that there's value in slowing down.
There's value in not having a transaction, and not even having a point to the conversation. Like small talk.
Jay Ruparel 25:04
Yeah, absolutely. I'll give you another example. Two weeks back, someone from our HR team came to me saying, hey, by the way, there is this tool that everyone has started using. What the tool does is let you create a template of happy birthday emails; you just upload all the employee data to it, and then it would basically just send out a personalized email. And you could actually personalize it to a great extent. But I said, what is the point of me, for example, having to use that as the CEO? For me, that relationship, that wishing someone on their birthday as a personalized conversation, means so much to me. That's the last thing I would want to ever automate and have my bot do. I would rather not do it at all than have it automated.
Jay Ruparel 26:08
Like you said, the drive-through could be very transactional, but this is not something I ever want to be transactional. Right. It is that one day in the year where you really get the opportunity to wish the person - you don't want to automate that.
Daniel Stillman 26:23
I think that's beautiful, because this brings in a perspective. I mean, everything is a spectrum, and in this sense there's over-design or under-design, but that's all just a perception. I often quote my mother when she was reading my book. She was like, Daniel, I don't always want to design my conversations. And I said, Mom, that's a choice. Choosing to not design the conversation means that you're saying, I want to do it myself, in my own way. That is a design. I don't want it automated, I don't want it scripted, I want it to be new every time. And I'm curious how you think about the structures that you do like, that you do use as a CEO, to manage all the complex conversations that you have. Because it's not just, I presume, winging it every day with your leadership team, with your other key stakeholders. I'm curious how you think about what the minimum viable structure is - what Jay's conversational intelligence guidelines are - for how you design some of those conversations.
Jay Ruparel 27:39
Yeah, this may be like a secret recipe that you're asking me to share, but I'm happy to do that. This is something I've learned from some of my mentors as well. One of the things I do, even for specific meetings that I have: I would basically plan by taking just a few minutes in advance to think through what it is that I want to convey.
Jay Ruparel 28:21
The objective of the conversation is something that I start with, and I try to keep it to a minimum and have that piece of communication made very clear. So, for example, if I'm going to have a 30-minute meeting with a group of four or five members, and my core objective from that meeting is to communicate one or two things, I will basically either write it down or phrase it in my mind: these are the two most important things I want to convey through the meeting. And then, based on the context and the nature of the participants, I would decide the way I would like to convey that. Sometimes it could be direct, sometimes it could be through some examples or citing some data to say why that needs to be done. So it starts with the objective, it factors in the context and the type of participants, and then it comes to how it needs to be delivered. And in this, one of the things that I pay special attention to is the nature of the participants, because sometimes you're dealing with people that you may not know so much about, right? When it is internal teams, usually I understand what type of communication style will lead to the right impact that I want. When it comes to others - clients, other stakeholders whom I don't know that much - I usually take the first few minutes, maybe even the first 20% or 30% of the meeting, to really understand them. So there will be discussions that we have which are not really related directly to the topic, but which will just help me understand what the effective way for me to communicate is. In my mind, I'm clear that I'm going to discover that in the meeting and then use the style which will create the right impact. That's the little bit of homework. And one of the things - this is something I learned from a mentor - is that I actually have scheduled on my calendar what I call my me time, which is in the morning.
So that me time is really from 8:00 to 9:00 a.m. I don't set any other meetings, because my calendar is blocked for that me time. And that is the time when, for all my meetings through the day, I'm doing this objective setting: what do I want to achieve, how do I do it? All of that is being planned, and other activities are being planned as well.
Daniel Stillman 31:40
It's funny - you answered so many questions. There are so many layers there, and I want to peel a little bit of that onion, because blocking out that me time is so important, like absolutely essential. Because if you don't have that preparation time, you're going in a lot more blind, unclear about what's going on. And I imagine you've seen the impact of the me time, and so you don't skimp on it. What would it take for you to skimp on your me time? Would you? Have you? I mean, I'm sure we.
Jay Ruparel 32:17
All have, but I try not to because it's a big deal. Now, I have seen the benefits of that. I would not do it unless it is like something which is super urgent, unexpected, coming up. Yeah, that's the sacred time I have for myself. I will never compromise on that.
Daniel Stillman 32:40
And that's to really look at your day and say for each of these conversations, what is my goal or objective, what do I know about the context and what do I feel like? Is my tailored approach? Is it direct? Is it more example or data driven? Do you do that in a sort of structured way or do you feel like it's a little bit more intuitive now that you've been doing it for some time?
Jay Ruparel 33:07
Yeah, I started doing it in a very structured fashion. I would actually look at every meeting and make bullet points in my notes app, and I would have that in front of me on my iPad when I'm in the meeting. So I used to do it that way, but it has become less structured now - it's more intuitive, but I still need to think through that process. And one of the things that I've also learned - and this might sound
Jay Ruparel 33:49
it might sound a bit odd. It sounded odd to me when someone told me this, and then I started practicing it. I actually imagine the impact that I want to get out of the meeting, and I just think about it for like 15 to 30 seconds. For example, if the impact of the meeting that I want to have is: we are behind schedule, let's make sure that we put in all our efforts to get this done by the end of the week - and that's what I want the team to agree to - I would basically imagine the situation where someone is saying that at the conclusion of the meeting. And that allows me to reframe some of the way I was thinking about that meeting, right, to achieve that. Once you visualize the end result, you have a better understanding of how it will play out, and then it just gives you that energy to make it happen.
Daniel Stillman 35:11
And this goes back to what we were talking about, about AI being past-oriented. Visualizing an ideal future is your job. Yeah. I don't think anybody else, or anything else, can replace that job for you.
Jay Ruparel 35:31
Absolutely. Yeah. And because that is so individual to you, the way you visualize an outcome is so individual to you. There is no one prompting you, like you would prompt ChatGPT, on what that outcome has to be. Absolutely, I think you've connected the dots very well. In fact, there was a fascinating article on, is ChatGPT a Reflection of Humans? Or something around that title. And it threw up some very interesting observations. Ultimately, when you are interacting with an AI or ChatGPT, it is basically reflecting
Jay Ruparel 36:22
what prompts you are giving. Right. A lot of times, yes, you get some bits and pieces of knowledge there, but ultimately your prompts really dictate what you get out of it, and also what you interpret out of what you've got. So it's really your own reflection. And some people like that reflection, some people don't. But it's like a mirror. I think the article was, Is It a Mirror to the Human Soul? And I thought it was a fascinating article.
Daniel Stillman 36:55
Well, you bring up a really valid point, which is like the questions we ask dictate the quality of the answers we get.
Jay Ruparel 37:02
Right.
Daniel Stillman 37:02
I'm curious how you think about that, because it's an aspect of conversational intelligence. I think often we think about conversational intelligence as what I say. It's also how I listen, but it's also how we ask. And I'm curious, in terms of trying to get the best out of your team, which is, I think, such a fundamental role of the CEO, presumably everyone reports to you and they are better because of it. I'm curious how you think about how you ask, how you elicit, and how you reflect, because you're not a bot, but we're still doing those same components that a bot is doing. What's the operating system of your CEO bot when you're doing those CEO-level conversations?
Jay Ruparel 37:52
Yeah. I'll give you a real example. Just last week we had a meeting of the product team, and one of the things we asked everyone to do as pre-work was: describe our product in five sentences, but each of those has to be a question. So don't say "our product does this"; ask a certain question, and that should describe the product. And the reason why this is important is that, like you said, it is all about what questions you ask. In this new AI-driven world, the answers are out there, but the questions are really important. And that's why this whole subject of prompt engineering, as they're calling it, is going to get more and more important. So I think in the organizational context also, the way I feel is that we have kind of given up a little bit on curiosity as a human trait. Humans were far more curious earlier, and somehow that has been waning, and it now has to come back to the fore. Getting answers is easy; that's the way I think the future is going to be. You can ask a bunch of people and you'll get answers. But what questions to ask, how to ask them, what context to keep in mind, how to give the questions the right language, the right conversation style, this is very important. That's what we want to do, and we're doing it, like we did in this product meeting: create ways that we can facilitate employees to really ask questions. And even in meetings, it's not necessary to end with answers; you can just end a meeting with a lot of floating questions, to say, oh, we spent 30 minutes and we came up with eight fascinating questions. That's a great outcome of a meeting, right? It doesn't matter whether you had the answers or not, but we asked great questions, and that will create a very healthy organization in terms of the quality of interactions you can have.
Daniel Stillman 40:31
I think that's a very interesting value to apply to conversations because somebody could easily say at the end of 30 minutes, I want to have a clear decision or a yes or a no or a next action. And that's one way to design a meeting which is just a group conversation. But it's really interesting to hear you talk about valuing, getting to a small set of interesting questions at the end of that conversation because that will fuel the next valuable conversation, presumably.
Jay Ruparel 41:11
Yeah, absolutely. I completely agree with that, and I think the value that we will start seeing in humans, as we employ professionals, is going to be in that. It's going to show even in interviews. When we are interviewing people, it was always about asking questions and seeing who has the smartest answers. I think we are already seeing that change, and it will be seen even more: what are the questions that you're getting from the candidate, and are those interesting, relevant, meaningful questions? Right, yeah,
Jay Ruparel 41:58
I'm very excited about how the true human capabilities, in terms of creativity, curiosity, compassion, the ability to envision the future, all of this could be augmented even better with what's happening with AI.
Daniel Stillman 42:23
It's so interesting, and I'm really glad we covered a lot of what I was hoping to. As I said, this is one of the reasons why I wanted to have you on: conversational intelligence is such an important topic, and you're one of the few people, I think, that can speak to it from both sides of it, the technical emulation and simulation of conversational intelligence, and also making choices and optimizing for human conversational intelligence. So I feel like we've covered a lot of what I was hoping we would cover. But I'm curious if there's anything we have not talked about that you think we should have touched on, because we still have a couple of moments.
Jay Ruparel 43:07
Yeah, I think we have covered some really good points, but I do want to mention one thing that is going to be interesting as it plays out over the next few years: how we understand the ethics governing some of these AI-based conversations and interactions. Because the regulation has still to catch up, and right now it's kind of freewheeling for all the technology companies, and they're using AI the way they want. But there are some concerns around to what extent you should use AI, and where you cross the boundary in terms of ethics and privacy and other compliances.
Daniel Stillman 44:05
It's a huge topic. I mean, how should we even be framing the conversation? How should we be designing that conversation about ethics? Like you said, the law is always going to be trailing well behind the technology. How are you thinking about it?
Jay Ruparel 44:27
Obviously, when it comes to human-to-human conversation, because there is a human at the other end, there are norms around privacy and ethics, and you are automatically framing conversations carefully. With a bot, and the kind of access to data that it has, I think it is about really reimagining what those boundaries could be. And we had a couple of instances where conversations came up about: can we understand the ethnicity of the customer and customize the offering around that? Now, the AI can maybe do that, but is it right to do that? Those are the kinds of conversations. So I think there's a lot that needs to be done in terms of just understanding the implications of AI. And the term that I keep hearing is responsible AI: how can you act responsibly towards society and the environment, and make sure that while the technology has a lot to offer, we tread that path carefully without violating anyone's privacy or any ethical norms.
Daniel Stillman 46:06
Privacy is a really interesting aspect of the conversation, and so is how it's implemented; potential discrimination is an interesting lens. There's another one: I'm curious how you think about the AI supply chain. Because, as you mentioned, and I've read about this before, we're training the AI on humans who then potentially don't do that job anymore, or do that job less. And I don't think you necessarily have control over how a company brings this into their brand architecture and into their employment habits. But I remember reading a story about how, you know, we've got a sales team of 100, and we basically train the customer service bot on the top 10%, and then the bottom third of the sales force becomes much more effective. The top 10% is still great; they don't really need the AI that they've trained. But they aren't getting bonuses anymore because of performance, if that's how they were getting compensated; the compensation structure changes. And now the whole rest of the company is doing better because of the people who were doing the best in the first round. I see this: the AI is learning from us, and then what happens? And I think that's a part of the ethics that people don't often think about. I'm wondering how you noodle around that.
Jay Ruparel 47:47
Yeah, no, I think it's a really good point, because we often think about AI in terms of replacing specific activities or jobs, or taking those over, but not in terms of just being able to learn from the existing workforce and then pushing the envelope for everyone in the workforce. There are obviously positive aspects to it, in the sense that you could actually shake out a lot of complacency that may exist in certain parts of the organization. Now everyone has to kind of reinvent the way they deliver value, because the same things that they were doing, probably some or most of those can be done by the AI. So now you need to look at higher-value-added activities. But then the other side is, as an organization, there's always this challenge of knowledge and expertise residing in individuals versus the organization. An organization is always trying to look at ways to extract that knowledge and expertise so that it becomes less dependent on specific individual talent. I think there are both sides to this, but there's a lot of work being done by the people involved broadly in the future of work and how you'll be working. Instead of having human workers in a typical office, think of it as almost some humans and some AI bots, and you're all working together to achieve the task. I can clearly see that that's the future that will be emerging.
Daniel Stillman 49:59
I think if we're going to talk about ethical AI, then it's a broader conversation, well outside of our scope today, in terms of what ethical capitalism looks like. Because in an individual company, you're absolutely right: it's optimized to learn as much as possible as an organization and to not rely on the knowledge or expertise of one individual, because the goal is to deliver a service or experience reliably regardless of whether that person is sick or leaves for someplace else. So that incentive structure is there. I think this is a much bigger conversation, and I'm glad we touched on some of this. It's an ongoing conversation I think everyone needs to be having.
Jay Ruparel 50:48
Absolutely.
Daniel Stillman 50:49
But I know we're at time. Time goes so fast. Jay, where should people go to learn more about all things Jay and all things VOICEplug? Where should we point them to on the Internet if they want to stay in touch or stay part of the conversation with you?
Jay Ruparel 51:08
Yeah. VOICEplug is at voiceplug.ai, and you can always search VOICEplug on LinkedIn and Twitter. We have some good posts and blogs that we do on LinkedIn. And then, of course, you can reach me on LinkedIn, just Jay Ruparel, and I'll be happy to connect if there are any questions, any thoughts, comments. Happy to do some follow-up conversations as well.
Daniel Stillman 51:38
Thank you so much. And I'll put links to both of those in the show notes. I am so grateful we had this conversation. It was very far-ranging. I'm glad we touched on some thorny topics, and I really appreciate your openness and your honesty.
Jay Ruparel 51:53
Oh, I really enjoyed the conversation. I think you asked some really nice questions and brought in some great points. I don't know where the time flew by. This was a great conversation. Appreciate you having me on the show.
Daniel Stillman 52:07
Thank you so much, brother. Well, the feeling's mutual. We will call scene. Thank you.
Jay Ruparel 52:14
Thank you.