307 / April 12, 2025
$0 To $5M In 6 Months: Sharad Sanghi On Why Local AI Clouds Matter NOW | Neon Show
$50M AI Cloud Startup from India
Meet Sharad Sanghi, who built India’s first data center and spent 25 years building Netmagic, India’s largest data center company. He came to India in 1995, worked at VSNL as the country was discovering the internet, and built a company focused on internet for businesses.
Now he is building Neysa, an AI cloud startup, which recently raised $50M to help businesses adopt AI, all from India.
With Neysa, businesses can use AI without writing a lot of code or juggling five different tools to run it, and can do everything, train models, test them, deploy them, and monitor them, in one single dashboard.
Sharad has a lot of perspective to share—as someone who was at the forefront while India adopted the internet, and now, the AI wave.
If you’re building in AI, part of an enterprise exploring AI, or simply thinking about where India is in the AI race—this episode is for you.
Watch all other episodes on The Neon Podcast – Neon
Or view it on our YouTube Channel at The Neon Show – YouTube
Siddhartha Ahluwalia 1:19
Hi, this is Siddhartha Ahluwalia, your host at Neon Show and Managing Partner of Neon Fund. I’m today happy to host Mr. Sharad Sanghi, CEO of Neysa. Sharad, so glad to have you on our podcast.
You have been one of the most successful second-time entrepreneurs in India. You previously built Netmagic, which got acquired by NTT Corp. I’m glad to discuss your journey today with our audience.
The journey of Netmagic, and how you’re starting Neysa, right?
Sharad Sanghi 1:51
Thank you so much for having me, Siddhartha. I’ve heard so much about you.
Siddhartha Ahluwalia 1:55
Thanks. Thank you so much. So, to our audience, you know, Neysa is an AI acceleration cloud system provider, which democratizes AI adoption with purpose-built platforms and services for AI native applications.
How would you explain this to laymen?
Sharad Sanghi 2:12
Yeah, so basically, you know, almost everybody today, whether it’s an AI native startup or an enterprise, either is already using AI in production or wants to use AI in production. But a lot of them don’t know how to go about it, right? So our job is to simplify and accelerate the whole process, so that they can start using AI in production faster, and to make it as cost effective as possible.
So that’s, in a nutshell, what we wanted. That’s our objective, right? So we’ve set up an AI cloud platform in India.
We have also set up an orchestration platform on top, so that we can make it easier to allocate and schedule GPU resources, and a platform above that which allows people to do their entire machine learning journey. Within enterprises, we see two kinds of users.
One is business users. For them, we give a low-code, no-code platform so that they can build their Gen AI apps easily. And a lot of more mature companies have their own machine learning engineers and data scientists.
For them, we give an entire MLOps platform, right from ingesting data and cleaning data to feature selection, model selection, model training, fine tuning, model inferencing, and then the entire loop again. So this is something that we built as a platform that we offer.
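The loop Sharad describes, ingest, clean, select features, train, infer, and repeat, can be sketched end to end. This is a toy illustration in plain Python, not Neysa’s platform (which is not public); every stage below is a simplified stand-in for the step it is named after.

```python
# Toy sketch of the MLOps loop described above: ingest -> clean ->
# feature selection -> train -> inference, then repeat.
# Plain Python, no ML library; each stage is a simplified stand-in.

def ingest():
    # Stand-in for pulling raw rows (feature1, feature2, label);
    # None marks a missing value.
    return [(1.0, 2.0, 1), (2.0, None, 0), (3.0, 6.0, 1), (10.0, 8.0, 0)]

def clean(rows):
    # Drop rows with missing values (a real pipeline might impute instead).
    return [r for r in rows if None not in r]

def select_features(rows):
    # Toy feature selection: keep only the first feature plus the label.
    return [(x1, label) for (x1, _x2, label) in rows]

def train(rows):
    # "Training": a nearest-class-mean classifier over one feature.
    ones = [x for x, y in rows if y == 1]
    zeros = [x for x, y in rows if y == 0]
    m1, m0 = sum(ones) / len(ones), sum(zeros) / len(zeros)
    return lambda x: 1 if abs(x - m1) < abs(x - m0) else 0

def evaluate(model, rows):
    # Inference/monitoring step: fraction of rows labelled correctly.
    return sum(model(x) == y for x, y in rows) / len(rows)

rows = select_features(clean(ingest()))
model = train(rows)
accuracy = evaluate(model, rows)
print("training accuracy:", accuracy)  # 1.0 on this toy data
```

In a real platform each of these stages is a managed, observable service rather than a function, but the shape of the loop is the same.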
We also have an MLOps services layer: a team of engineers that basically helps customers when they get stuck, right? Very often the application people do not know about the infrastructure.
So we have a bridge layer that helps hand-hold customers through that. And last but not least, we partner with people with domain expertise. We have partnered with people in the banking sector, in the insurance sector, in the call center industry, in the healthcare sector, so that we can provide use cases to clients specific to their domain.
Sure.
Because we can’t do everything ourselves. So that’s where we bring in partners. And with all of this together, AI no longer remains a myth for clients.
So we do it as cost effectively as possible. And also we actually are able to give them an actual use case that they can take in production.
Siddhartha Ahluwalia 4:32
Got it. And how did you conceptualize this, right? Your previous journey was very different from this.
Sharad Sanghi 4:39
So I’ll give you some context. I started Netmagic way back in 98.
NTT took majority stake in 2012, but they took the balance stake end of 2017. And then in 2019, I was supposed to leave. But, you know, NTT requested me to stay for some more time.
And my last role at NTT was actually running the global data center business. Towards the end of 22, early 23, ChatGPT went viral. By then we had built a cloud platform, a non-AI, traditional cloud platform, if you will, called Simply Cloud, which was doing really well in India.
Obviously not at Hyperscaler scale, but doing really, really well in an Indian context. We started seeing requests from clients for AI workloads. And we knew that all we needed to do was redesign that platform so that we could add GPUs and bring on AI workloads to accelerate clients’ journeys.
My role in NTT was different at that time, so I tried from that role to get this done. But, you know, obviously that was somebody else’s mandate.
So then I decided to leave and start it on my own. I wasn’t competing with NTT; NTT didn’t build its own cloud platform, they actually decided to partner with Hyperscalers instead.
And so, therefore, I took permission from them and started this. Initially, because GPUs are very capital intensive, we thought we’d focus on the software layer: the orchestration layer, the platform layer, the observability platform and also security for AI workloads.
But then when the investors came in, Nexus, NTT and Z47, they all said, you know, selling just software in India is not so easy, so why don’t we do an entire comprehensive solution? So we then decided to get in and do the entire cloud platform, which included the capital intensive GPUs.
Right. So that’s how this started. And basically, I started Neysa in July 23.
And we launched our services end of July 24.
Siddhartha Ahluwalia 6:51
OK, got it. And what scale is Neysa at today?
Sharad Sanghi 6:55
Scale in terms of number of employees, 65. In terms of GPUs, we’ve already deployed 1200 GPUs. We’re now looking at expanding that GPU infrastructure.
Siddhartha Ahluwalia 7:04
Got it. And roughly how many enterprise clients would you have today?
Sharad Sanghi 7:08
We have about 15 enterprise clients. We also have some research institutes. We also have some AI native startups.
And we are doing some POCs with some government institutions as well.
Siddhartha Ahluwalia 7:20
Got it. Would love to discuss your previous journey. Right.
You were very early to the Internet, like Internet public access was launched in 1995. But the government didn’t allow private players until 98. Right.
And immediately you started Netmagic in 98. What was the vision back then?
Sharad Sanghi 7:39
OK, so I’ll tell you this. So I was working in the US. So I did my undergrad from IIT Bombay, then did my master’s at Columbia, New York.
From the campus, I was selected to work on the first large backbone of the Internet, called the NSFNET. So I was working as a backbone engineer there. But my aim was always to come back to India, mainly because of family reasons.
I wanted to be with my parents. So I was looking for an opportunity to do something similar in India. So I was constantly tracking what was happening in India.
Till 95, there was no commercial Internet in India. The government announced commercial Internet in the year 1995.
And August 95 is when commercial Internet started in India through VSNL. That’s when I moved. I moved back into India in August 95.
And I was one of the early users. I didn’t know the guys running it, but through a friend, Suchit Nanda, I got connected at the initial launch of the services.
There were a lot of user complaints; things used to not work, etc. And since I had a background in setting up Internet infrastructure, through my friend, I reached out to the VSNL guys.
And I met the gentleman running it there, a guy called Neeraj Sonker. He said, OK, you know, everybody comes to me saying they know how to do stuff. Here, this is the equipment.
Go and tell me how you can fix it. So I found a lot of issues, and I fixed them.
So they gave me a consulting contract. I worked from 95 to 98 as a consultant, not only to VSNL, but to other institutions as well who wanted large networks set up. And I was tracking this; we knew that the government was going to privatize it soon.
As soon as it got privatized, I set up Netmagic. What I noticed in that period, especially in 98 when it was announced, was that most service providers were focusing on retail customers, because at that time, valuations of companies were based on number of subscribers, number of eyeballs and stuff like that. Right.
Even in that entire dotcom boom, it was how many subscribers you have, how many users are accessing your site, etc. Not actual revenue. I was also fortunate to have met the founders of Exodus Communications, who set up the first data center company in the world.
And, you know, so the aim, my aim was that, look, let everybody focus on consumer Internet. I want to focus on how Internet can be useful for mission critical businesses, how mission critical businesses can leverage the Internet. And that’s how I set up data centers.
I got exposure to that in the US. Exodus was a hot company at that time. And the founders of Exodus, you know, liked what I planned and they decided to angel invest.
And then there were a lot of, you know, VCs that had…
Siddhartha Ahluwalia 10:31
BV Jagadeesh, right?
Sharad Sanghi 10:32
Yeah, BV Jagadeesh and K B Chandrasekhar; BV Jagadeesh was the lead, and even Kanwal Rekhi invested. And then a lot of these VCs that had backed dotcom startups knew that all those companies needed data center infrastructure.
And so then some of them approached me and funded us. And so we raised money in early 2000 and we set up our first data center in October 2000.
Siddhartha Ahluwalia 11:04
And how much did you raise back then?
Sharad Sanghi 11:05
At that time it was only 4 million. The first round was 4 million. And then we did subsequent rounds, another 15 million round, another 12 million round.
And we finally got acquired by NTT in 2012. The way the acquisition worked was that first the investors, VCs that owned roughly half the company, were bought out. Employees had a significant stake, almost 18%, which also got acquired.
And then for the balance, you know, I sold very little. I kept the bulk of my equity.
Siddhartha Ahluwalia 11:38
You still believed in the company.
Sharad Sanghi 11:39
Yeah, yeah, yeah. So I continued to do that. And so even though the balance stake was acquired in 2017, I ended up staying till 2023.
And had NTT given me permission to set up this AI cloud infrastructure within NTT, I may not have even started this. I would have probably just done it there.
Siddhartha Ahluwalia 12:00
We were fortunate, you know, that it happened, so that we have local cloud providers in India, especially in the AI wave, when AI and GPUs are becoming matters of national security.
Sharad Sanghi 12:16
Yeah, data sovereignty is very important. Yes.
Siddhartha Ahluwalia 12:18
I think in India we have only three scaled cloud providers now. You are there, Krutrim is there and E2E Networks is there.
Sharad Sanghi 12:25
Yes, yes. Yeah.
Siddhartha Ahluwalia 12:26
I think there’s no fourth one. It requires so much expertise and so much investment.
Sharad Sanghi 12:29
Correct. Correct.
Siddhartha Ahluwalia 12:30
Like roughly 50 to 100 million dollars.
Sharad Sanghi 12:33
Yeah, there are a few more coming in. There are a few more providers, but yeah, these are the ones that are doing well.
Siddhartha Ahluwalia 12:39
Yeah. And it’s a huge setup, right?
It’s not like you can set up with one or two million dollars.
Sharad Sanghi 12:42
No, no, no. So we’ve raised 50 million. We’ve pumped in more than 42 million in our cloud infrastructure.
Siddhartha Ahluwalia 12:48
Wow.
So 42 million is just in the infrastructure till now.
In the capex.
And where are your servers?
Sharad Sanghi 12:57
Servers are in my previous company’s data centers.
Siddhartha Ahluwalia 13:00
Okay. In India?
Sharad Sanghi 13:01
Yeah, in India. All our servers are currently in India, but we do want to take this across the world. So initially we’ve set up in India.
We’ve had some very good success initially. And now our aim is next fiscal to actually start expanding in other countries.
Siddhartha Ahluwalia 13:16
When Hyperscalers are so competitive, right, in owning the AI cloud, how do you plan to differentiate?
Sharad Sanghi 13:24
Great, great question. Almost every VC asks me this question, but there are multiple ways we differentiate. See, Hyperscalers have, by the term Hyperscaler itself, they have huge scale.
They are cookie cutter. They can’t customize solutions, number one. Number two, so for example, there are many clients today who want private clusters.
You know, for example, they’re okay with a public cloud infrastructure for inferencing, but they may want a private cluster for training, right, or for fine tuning. And so we are able to do both. We can give them private clusters.
Secondly, you know, the Hyperscalers impose a fair amount of constraints. Now, obviously, if you as an enterprise have signed up with their APIs and your development team is used to them, you’ll probably stay there, right. But we give a lot of flexibility in the choice of platform that clients can use.
We offer our own platform, but there’s an entire range of platforms to choose from. We also have our own MLOps team that hand-holds customers. Most importantly, at least right now, from a pricing perspective we are much more competitive.
And last but not least, it’s completely transparent pricing. You get a very predictable bill. Hyperscalers use a fair number of different parameters in their billing, which often catches customers by surprise. I don’t think there’s anything that’s not disclosed to clients, it’s just that customers are not aware, and suddenly they end up with a large bill.
Whereas in our case, you know, it’s very transparent and very predictable billing. So these are some of the differentiators. Especially the fact that we can offer private clusters and public cloud, and give clients a hybrid, is very, very attractive.
Siddhartha Ahluwalia
And what are the gross margins that you are targeting right now? The leader in the space, CoreWeave in the US, has close to 70% gross margins.
Sharad Sanghi
I think in India it will be lower, because the market is more price conscious and more competitive. I think the gross margin will probably be somewhere in the range of 40 to 50%.
Siddhartha Ahluwalia 15:30
But I think it’s a great opportunity; this is one field where you can scale to a hundred million dollars of revenue very, very quickly.
Sharad Sanghi 15:37
Yeah, you can scale much faster. Obviously it requires a lot of capital, but the scale-out in this sector is much faster. And we’ve seen successes of companies like CoreWeave and Together and Lambda Labs in the US that have done that very successfully.
Siddhartha Ahluwalia 15:54
And what happens is, you know, this is more of a word of mouth industry. Let’s say if you get one institution to adopt you completely, then related parties follow.
Sharad Sanghi 16:06
Yeah, word of mouth obviously works like in other businesses also. So this is also one thing where word of mouth works. But of course, you have to deliver, right?
And you have to deliver high uptime. You have to deliver quality of service. Let me tell you one thing, it’s non-trivial. We had set up a cloud earlier.
I can tell you that the level of complexity with GPUs and AI infrastructure, given how latency sensitive it is and the number of parameters you have to worry about, is an order of magnitude higher. You can’t assume anything will just work with anything else. You have to be careful of the firmware, the BIOS, the microcode that you deploy; it could be different for different kinds of equipment.
And so, you know, we’ve gone through that learning curve, all of that. We now have a standard reference architecture that we are deploying in our GPU cloud platform.
And now we believe we are in a position that we can now start scaling it across the world.
Siddhartha Ahluwalia 17:10
And what was the scale of Netmagic when the company, you know, got majority-acquired by NTT in 2012?
Sharad Sanghi 17:17
So I don’t remember the revenue numbers, but I know that we had set up data centers in Mumbai, Bangalore, Chennai and Noida. These were midsize data centers, like, you know, 10 to 15,000 square feet, up to 30,000 square feet. Now at NTT, we currently have close to 20 data centers.
Each of the data centers is between 200,000 and 300,000 square feet, on average about 30 megawatts per data center. So we have around 350 megawatts in operation and four and a half million square feet deployed.
And, you know, the entire Netmagic business, which included the data center co-location, the network, the managed services and cloud and security is over half a billion dollars.
Siddhartha Ahluwalia 18:11
And are you satisfied with, you know, the efforts that our entrepreneurs are putting in AI right now? Like, are we at par with China and US right now in terms of the competitiveness?
Sharad Sanghi 18:25
I think I’m very impressed by some of the entrepreneurs that I’ve seen in India. They are second to none in that sense. I think when it comes to foundational models, maybe we are a little bit behind, but I think we’ll catch up soon.
Right. So I think, you know, the US already has multiple companies; I don’t need to tell you, right from OpenAI to Anthropic to so many others.
Right. In China, obviously DeepSeek has, you know, pretty much shaken the entire world. Right.
And I’ve seen some very interesting foundational model startups in India, whether it’s Sarvam in Bangalore or BharatGen, led by a professor at IIT Bombay, and a few others. Right. So I don’t think we are second to anybody, but we are maybe a little bit behind when it comes to foundational models.
But at the rate at which we are going, I think we’ll catch up very soon.
Siddhartha Ahluwalia 19:24
Got it. And I think the advantage we have is that as a nation, we have built the most muscle in IT services. The question is how fast we are able to transition from IT services to AI services. Right.
And obviously, when you transition to AI services, a large part of it will be done by AI products, human in the loop, along with the product.
Sharad Sanghi 19:45
Absolutely. Absolutely.
Siddhartha Ahluwalia 19:46
So I see there’s a huge opportunity for scale.
Sharad Sanghi 19:49
Yeah, a huge opportunity. I think the only caution I’ll add is that when it comes to talent, we need a little more depth. I don’t know what others are seeing, but for every 10 candidates we interview, we are only able to select about one, because a lot of the talent we see doesn’t have the depth that is required to scale.
Siddhartha Ahluwalia 20:12
I think that’s because, as a nation, we haven’t really focused on R&D. So any PhD researcher in AI that you see would have definitely gone to the US for their Master’s.
Sharad Sanghi 20:26
No, no, but it’s changing. Some of the talent we’ve hired has absolutely been exceptional. It just takes a little longer to get the right person.
The talent is there. But while there is a very large talent pool overall, the actual talent that can help you scale AI workloads is much, much smaller right now. And that needs to grow.
Siddhartha Ahluwalia 20:52
And what are the biggest security risks that you see facing the AI models today?
Sharad Sanghi 20:56
There are many risks, of course, but I’ll focus specifically on AI, right?
So, for example, there is, you know, data poisoning, right? Model poisoning, personal information leaking out, right? So, you know, you use an LLM and you give your Aadhaar card by mistake or whatever, right?
Of course, the traditional security risks on any infrastructure remain, but these are additional risks. For example, people download a model from Hugging Face. What if that model is compromised?
How do you make sure the model has not been compromised? The data sets you get, how do you know they have not been compromised? So we’ve got a team, which we’re branding as Aegis, working on precisely these issues.
It’s a team based out of Chennai, led by Ramesh, addressing precisely these kinds of issues. And we’ll have products and services in this area starting next year.
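One concrete mitigation for the compromised-model concern is checksum verification: compare the downloaded file’s hash against a digest published by the model provider. A minimal sketch, assuming the provider publishes SHA-256 digests; the file name and contents below are placeholders, not a real model.

```python
# Minimal sketch: verify a downloaded model file against a published
# SHA-256 digest before loading it. "model.bin" and its contents are
# placeholders standing in for real model weights.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_hex: str) -> None:
    # Refuse to proceed if the file does not match the published digest.
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch: got {actual}, want {expected_hex}")

# Demo with a throwaway file standing in for model weights:
weights = Path("model.bin")
weights.write_bytes(b"placeholder model weights")
published = hashlib.sha256(b"placeholder model weights").hexdigest()
verify_model(weights, published)  # passes silently when the file is intact
```

The same idea extends to datasets; in practice teams also pin exact model revisions and prefer signed artifacts over bare hashes.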
Siddhartha Ahluwalia 22:05
And right now, you know, with, let’s say, hallucinations happening, for example, in DeepSeek, you know that when you use their own cloud and you query Tiananmen Square, you get certain results. Similarly with OpenAI, right? So there are biases, there are hallucinations.
So for core workloads, how can you rely on AI?
Sharad Sanghi 22:31
So, you know, for example, people use RAG to mitigate hallucinations. You give the context of your own corporate data to mitigate some of these issues. And then there are some guidelines on ethical AI to remove biases.
And so you use best practices. In India, I think NITI Aayog and others are working on some best practices. I think they published a white paper recently.
I’m not sure there’s any law yet, but they are working towards best practices that people should follow to make sure it’s responsible AI, to remove as many biases as possible and to reduce hallucinations. Obviously, giving the right context becomes important.
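The RAG approach mentioned here can be sketched in a few lines: retrieve the most relevant passage from your own corpus and prepend it to the prompt, so the model answers from that context instead of guessing. Retrieval below is a toy word-overlap score over made-up documents; real systems typically use embeddings and a vector store.

```python
# Toy sketch of retrieval-augmented generation (RAG): ground the prompt
# in your own corpus before sending it to an LLM. The corpus and query
# below are invented for illustration.
corpus = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our support desk is open Monday to Friday, 9am to 6pm IST.",
    "Enterprise plans include a dedicated account manager.",
]

def retrieve(query: str, docs: list[str]) -> str:
    # Score each document by how many query words it shares (bag of words).
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend the retrieved context so the model answers from it,
    # not from whatever it memorized during training.
    context = retrieve(query, docs)
    return (f"Answer using ONLY this context:\n{context}\n\n"
            f"Question: {query}")

prompt = build_prompt("How many days to file a refund", corpus)
print(prompt)
```

The prompt would then be sent to whichever model the client uses; the grounding step is what reduces hallucination, and the corpus stays under the client’s control.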
Siddhartha Ahluwalia 23:25
And how do you see our companies adopting AI security? Can you describe that?
Sharad Sanghi 23:30
So that’s very important. AI security is important for everyone, right? People have put in a lot of effort.
They don’t want that to get compromised. You know, the results could be different if the data or model is compromised, for example. You know, we’ve got customers in the banking industry, in the insurance sector, they’re extremely, extremely careful on security.
There’s a whole questionnaire we need to pass before they even allow us to deploy infrastructure for them. Security has always been important, right? But with AI, given the outcomes are probabilistic, it becomes even more important.
Not only because of biases and hallucinations, but also because of compromise. And obviously some of these institutions are governed by RBI, SEBI, etc., so there are some very strict conditions they have to adhere to before they can actually use AI in production.
So that’s one of the reasons why, if you look at our platform, right, one piece is the infrastructure itself. Second is observability: how do you manage and scale the infrastructure?
And the third is the security. We believe security is going to be a very, very critical piece going forward. It’s early days right now.
And so maybe there has not been as much adoption of security services as of now on AI. But it’s only a matter of time. In the valley, there are like at least 50 startups that are just focusing on AI security.
Siddhartha Ahluwalia 25:05
And I think the other important component is larger institutions, right? They have CIOs, but not CIOs who have spent their lives in AI, AI-native CIOs, for example. So how do they evaluate which AI solutions to bring in?
Sharad Sanghi 25:29
How do they bring in AI talent? You would be surprised. Some of the top companies, leading banks and institutions, have separate chief AI officers.
They’ve started actually getting that. See, one must notice that AI itself, machine learning is not new, right? It’s been there for several years.
And in production, people have been using it for, you know, like banks have been using it for fraud detection for more than 8-10 years. So some of the leading banks of India, private banks, have got 300-400 data scientists and machine learning engineers. What is more recent, the last two, two and a half years, is the Gen AI piece, right, where it became viral, right?
And so almost all large banking, finance, and many other industry verticals have AI teams; there’s a lot of data, and they have not only data scientists but also machine learning engineers, right? Now, the CIO and the chief AI officer, maybe those are two separate functions.
In some cases the two may be one, but in many cases they’re two separate functions, and maybe the chief AI officer reports to the CIO. But most of these companies do have chief AI officers, yeah.
Siddhartha Ahluwalia 26:37
So you are saying, like, the large institutions in India are well-equipped to evaluate the best AI solutions?
Sharad Sanghi 26:43
Absolutely, absolutely. Some of the large institutions are, and they’ve been using it on Hyperscalers for several years, right?
They’ve been leveraging AI. Now they can use it for a lot more, you know, there are a lot more use cases that they can deploy.
Siddhartha Ahluwalia 26:59
And what do you think about the mid-market?
Sharad Sanghi 27:02
The mid-market probably requires a little bit more hand-holding. They may not have as large teams; they may have fewer people capable in machine learning and AI.
But we’ve seen interest in AI virtually across the board. There was a recent study, I think by Gartner or NASSCOM, that said almost 70% of people are right now still doing experiments, and very little is in production. But I think this year there will be a shift where we’ll see more production AI workloads.
I think last year there was, you know, more of experiments and less production. But I think this year you’ll see a lot more use cases going into production.
Siddhartha Ahluwalia 27:46
Sharad, how would you compare the rapid evolution of AI application infrastructure to the cloud revolution that we saw in 2006? And it took years of maturity for the cloud to happen at scale.
Sharad Sanghi 28:00
Yeah, so I think the cloud revolution of 2006 has partly helped the AI revolution happen much faster. As you rightly said, the cloud revolution in 2005, 2006 took some time before it really took off. People had concerns about multi-tenancy security.
And so it took some time before it caught on. But then it matured and grew really fast. You’ve seen Hyperscalers do really well.
I think the AI wave is really an extension, right, if you think about it. The business model was already proven, so there was no need to prove it again. The benefits of cloud, being able to rapidly deploy, scale up, scale down, elasticity, et cetera, are all well known.
It took some time to adopt initially. And now you can put AI workloads on it, you can do high performance computing using it. So that’s why this time around it’s been much faster.
Also, GPU infrastructure is much more expensive than plain CPU servers, and in terms of volumes, it’s much larger. And last but not least, data centers have to be redesigned, because AI workloads need much, much more energy-dense data centers.
I remember when we started Netmagic, our average density of rack was like six kilowatts per rack. Then it went to 10 kilowatts per rack, then went to 22 kilowatts per rack. Now it’s at 40 kilowatts per rack and very soon it’ll go to 130 kilowatts per rack.
Because of AI, right, the energy density has gone up, and so has the volume. Data centers are already growing at very high rates, anywhere from 25 to 30 percent as per industry estimates, and they’re expecting the total revenue of data centers to literally quadruple in a few years.
So that’s the kind of scale AI has unleashed. It’s because of the sheer range of things you can do, not only from a productivity point of view, but actually changing your entire business, making your business more competitive. And so this time around, it has been much faster.
Because this time around, security, while still important, has already been solved to some extent. People have already adjusted and know what best practices they need to follow. Of course, there are some new security risks, as we discussed, but the adoption we’re seeing in AI is much, much faster.
There’s access to compute, there’s access to venture capital, there’s access to data center space, there’s access to network. We’ve got high speed submarine cables. And I’m giving the Indian context, but it’s true across the world.
Siddhartha Ahluwalia 31:01
And what do you think, in today’s world, gives the companies sitting at the application layer in AI, or one level above the middleware, a core advantage, right, or a moat? Because with things like Cursor, v0, and Lovable, the time to create and deliver is almost within days.
Sharad Sanghi 31:30
Absolutely, they have to constantly evolve, constantly create, constantly innovate, otherwise somebody else will come and disrupt them. That’s happening, and a lot of these companies have been doing that and therefore have been succeeding. They also have the early mover advantage. See, if they have access to customers early, they have access to customer data early, so they can build faster, right?
And so you have to be very nimble. I think vertical AI is something that will succeed. While there will be some applications that cut across sectors, we came across a partner of ours called Data Science Wizards that builds use cases for the insurance sector. They’ve built an entire general-purpose platform, but they’ve decided to focus on one domain to begin with, which is the insurance sector.
They've built more than 200 agents for the insurance sector. So literally every single task that an insurance company does, they can do. That's why they've become very, very useful for insurance.
They've had a lot of success in the insurance sector. I'm not saying that you can't build something that cuts across multiple sectors, but I think you'll see more and more people focusing on a particular domain and building more and more agents for that domain. And that is something that I think will do well in the next few years.
Siddhartha Ahluwalia 33:08
What we as Neon Fund believe, and where we are investing, is this: in today's world, most of the large models are trained on public data from the web, right? So if you have large sets of private data, if you are in healthcare or insurance or banking and have built that data over a period of time, and it is not publicly accessible, that's very valuable.
That's just very valuable, especially if you're building vertical applications.
Sharad Sanghi 33:39
Correct.
Absolutely. Absolutely. NTTVC has a portfolio company that focuses on healthcare data.
It's called nference. They've got partnerships with Mayo Clinic, and they've got a partnership with Duke, so they've got access to really, really valuable data. And they've built a lot of AI applications on top.
Yeah.
Siddhartha Ahluwalia 34:06
I think at least the creation of use cases, solving workflows, people are now doing it in a matter of days.
Sharad Sanghi 34:14
Absolutely. I mean, this company, DSW, built close to two to three hundred agents in a matter of a month.
Siddhartha Ahluwalia 34:23
And if you can remember, at Netmagic, how much time did it take after launching your first data center to reach one million in revenue, then ten million, and then subsequently fifty and other milestones?
Sharad Sanghi 34:35
I don't really remember those numbers exactly. I think we reached one million very fast, because dot-com was booming at that time.
I think within a year, we crossed much more than one million. Here also, we've done that in six months.
What we did in maybe two years there, we've probably done in six months here. So I don't recall the exact years, but we built that up gradually. I think this will be a faster build-out in terms of numbers.
But see, the beauty of the data center business was that once a client comes in, it's very difficult for the client to leave, right? Whereas on AI platforms, it's easier for clients to migrate.
Siddhartha Ahluwalia 35:22
But still, if you’re operating at an infra layer, the switching costs are pretty high, let’s say.
Sharad Sanghi 35:27
Yes, it is high. That's why, once you are with one of the hyperscalers, it's very difficult to move. But obviously, there are companies that specialize in just doing migrations.
Whereas with physical infrastructure, migration is even tougher; virtual infrastructure is easier to move than physical infrastructure. So yeah, both businesses are sticky businesses.
Both businesses, if you do a good job in execution, clients will stay with you.
Siddhartha Ahluwalia 35:52
And what are some of your learnings on product and GTM that you are applying this time also?
Sharad Sanghi 35:57
Yeah, so I can't overstate the importance of having product managers early on. That's one thing we did well at Netmagic. Every single offering had a product manager: for colo, for managed services, for security services, for network services.
We had separate product managers who went through the proper rigor of product management. They had product specifications that said what the service would and would not do, how we compared with competition, et cetera. The product managers were basically the P&L owners of their respective businesses, and they also assisted sales on how to sell. And that's the same thing we've done here.
So we have product managers for all three of our portfolio services: Velocis, our cloud platform; Aegis; and Overwatch, our observability platform. And the second thing you asked about was GTM. Our GTM is both direct and indirect.
We did that at Netmagic. At Netmagic we started initially with just direct sales and very soon realized that we had to partner, so we did that a little later on. We had our own direct sales team across the country, and then we had partnerships of two kinds.
One was channels, and the other was more strategic business development alliances. We used to call them alliances there; it could be the Big Four, it could be the hyperscalers, et cetera, partnering with us so that we could work together. Here we have different kinds of partners, but we've done the same thing.
So we've got a direct GTM across the country, and we also have partnerships with companies like Data Science Wizards, like Relab, and others, where we go to the market jointly. They sell their service to a particular vertical and we bundle in our infrastructure. So we've done a similar GTM here as well.
Siddhartha Ahluwalia 38:03
Got it. And what are some of your learnings from scale that you think that will be very useful for you this time? Like when you hit scale in that journey?
Sharad Sanghi 38:12
Yeah, so one of the unlearnings, now I remember. By the end of 25 years at Netmagic, I had gotten used to having enough people in legal, in finance, in operations, in everything. We had full teams, so if you wanted to start something, it was very easy. Here, you have to build everything from scratch.
So I had to unlearn that mindset, roll up my sleeves, and do everything again myself. That is something we had to do at Neysa.
Could you repeat your question?
Siddhartha Ahluwalia 38:46
So I'm asking about some of your learnings on scaling.
Sharad Sanghi 38:51
Yeah. So on scaling, one of the biggest learnings was that you can't wait; you have to take a gamble. Like in the data center business, I can't say, look, I'll open a data center once I get business.
Siddhartha Ahluwalia 39:05
You get demand.
Sharad Sanghi 39:06
Yeah. I can't wait for demand to come before I open it up.
So that is one thing. We built data centers in Bombay during COVID, for example. I bought almost a hundred acres of property across the country at a time when we couldn't even go to work. I was just going and buying property.
And we started building these campuses, and all of them are full.
So you can be smart about it. For example, in data centers, what we did was buy the land; land doesn't depreciate. In each campus, we built one core and shell.
And then when the demand came, we fitted it out. So you were aggressive, but you didn't say, okay, only when I get demand will I acquire land.
Siddhartha Ahluwalia 39:49
But that would have taken hundreds of millions of investments.
Sharad Sanghi 39:52
Yeah. So in Netmagic, after NTT's acquisition, a few billion dollars have been invested; we've invested more than $2 billion, and almost four to five hundred million dollars a year is what we're investing in Netmagic now.
At Neysa also, similarly, we didn't start small. As I said, out of the first $52 million, 84 to 85% has been put into infrastructure. Because you can't wait; look, there's a lead time for GPUs.
If you want an H100, it takes four to six weeks. It used to take six months; now it takes four to six weeks.
Now, of course, the new Blackwell series takes six months. But the thing is that you have to take that plunge. Obviously you have to stay within your means, you can't overstretch, you can't overcommit, but as long as you have the funds, you go all out.
You can't say, okay, once I get a client, I will then start building. Similarly, when we expand to other geographies now, we will have to take a bit of a leap of faith.
When we expand into other countries, obviously we may need one anchor client before we do it, but I can't say that only when I have this much business will I come. Because the way this business works is that when somebody needs an AI workload, they need it immediately.
It's not that they're going to wait six weeks for you to set something up and then come to you. In data centers, at least, customers give you some notice. When a hyperscaler, for example, wants a captive data center built, they'll give you 18 months to build it.
Or if they want floor space, they'll give you a six-month head start. Here you don't get that time.
So the commonality between the two is that you can't wait for business to come before you invest in CapEx. Of course, there's a risk, because the rate of obsolescence of GPUs is so fast. NVIDIA keeps coming out with new GPUs, and other companies are now catching up. So if you have a very large inventory, you could be in trouble.
You have to figure out the balance: invest enough that you have room to scale, but not so much that your GPUs sit underutilized and become obsolete.
Siddhartha Ahluwalia 42:08
So one thing that we are seeing with DeepSeek is that the cost of inferencing has crashed compared to OpenAI. And that's why we are seeing OpenAI build out different applications too, which is why they launched their AI SDR, and we don't know which other hundred applications they are working on. They know that if the cost of inferencing crashes, they would have to have some moats. And then, obviously, the kind of investment folks like OpenAI have made runs into billions of dollars, while the Chinese have done it at a fraction of the cost, and people will gravitate to wherever the cost is low, provided there is some stability.
Sharad Sanghi 42:56
I agree. I mean, there will be some USPs that, for example, OpenAI will still have, and people seeking those kinds of benefits will probably go to them.
On some benchmarks, DeepSeek is equal or better, but on a lot of benchmarks OpenAI is still better. OpenAI also has a team of people that works toward reducing biases, et cetera. So they've added a lot of moats.
So I would not discount OpenAI despite DeepSeek. But your point is that the cost of inferencing is coming down. From our perspective, inference as a service is still going to be a very attractive market for us.
Because when you charge per token and people use millions of tokens, it is still a pretty attractive business. Yes, I think the cost coming down suddenly with DeepSeek is good for our business, because it democratizes AI, and that's our mission, to democratize AI. The other thing that's happening is that as newer GPU models come out from NVIDIA and others, the cost per teraflop for the same performance will come down, which is good for the industry; there'll be more adoption.
So I think it’s positive, you know, for the industry.
Siddhartha Ahluwalia 44:25
It's almost following Moore's law in semiconductors, right?
Sharad Sanghi 44:27
Yes, yes, absolutely.
Maybe even faster. But it’s a good thing because then more people will adopt and there’ll be more people willing to try things out.
Siddhartha Ahluwalia 44:37
So you saw the dot-com boom and the dot-com bust, right? How would you compare this AI revolution, both in terms of the boom and in terms of what is hype and what can go bust?
Sharad Sanghi 44:49
So look, the benefits of AI are real. Like with every new technology, there's going to be hype, right? People will think that AI can do everything, and obviously that's not true.
Although a lot of the top stalwarts are saying that we'll reach AGI in the next couple of years, et cetera, in my opinion there will still be things that you would not necessarily need to use AI for. So there is hype.
But if anybody thinks that AI is not real, that AI applications cannot make a difference in the real world, then they are missing the boat. We've seen enough use cases of AI in production where it has made a huge impact, not only for the common man in the way they've used ChatGPT, but in mission-critical business environments. We've seen it in the insurance sector, we've seen it in the banking sector, we've seen it in the healthcare sector.
So I think from an enterprise perspective, if they first spend time on what their business priorities are, and then figure out how AI can help, not only in terms of new ways of doing business but also in terms of productivity improvements, then it makes sense. But you can't just say, okay, I'm doing an AI pilot, without knowing what your business priorities and objectives are. I think that's where it sometimes becomes hype: people just want to try new things out without really thinking about how they're going to apply them in their business.
Siddhartha Ahluwalia 46:26
But do you think, as some people are saying, that the AI revolution or the AI cycle is 100x bigger than what happened between 1998 and 2000 in the internet boom?
Sharad Sanghi 46:42
I wouldn't say it's 100x, but I think it is much larger.
Siddhartha Ahluwalia 46:46
But you would say…
Sharad Sanghi 46:47
In terms of investments, if you look at it in terms of pure investment, it's much larger. I mean, look at the way NVIDIA stock has gone up, right? From a pure investment perspective, AI is much, much larger.
Siddhartha Ahluwalia 47:00
But could you predict it sitting a few years back?
Sharad Sanghi 47:03
No, I would be lying if I said I could predict this, right? I doubt anybody predicted the way ChatGPT went viral, or the way NVIDIA skyrocketed; I don't think anybody predicted that. Obviously Jensen believed in it, because he kept at it for 17 years before he saw the upside.
But I don't think most people could have predicted that.
Siddhartha Ahluwalia 47:27
I remember in 2015, NVIDIA was just a chip maker for gaming use cases.
Sharad Sanghi 47:37
Absolutely.
Siddhartha Ahluwalia 47:38
And then they rode multiple waves.
Sharad Sanghi 47:40
That's why GPU stands for graphics processing unit, right? They built it for rendering. But the beauty is that they were able to apply it to distributed AI workloads.
Siddhartha Ahluwalia 47:51
Yeah. And everybody had ignored the GPU market for the longest period of time because it was such a niche use case.
Sharad Sanghi 47:57
Absolutely, absolutely.
They were still the de facto choice for graphics and rendering. But nobody expected that it would take off this way.
Siddhartha Ahluwalia 48:08
Yeah. So it’s been 25 years for you, you know, and more than that, since you started your first company. What are the changes that you have seen, let’s say, across every decade in the Indian startup ecosystem?
Sharad Sanghi 48:26
I think a lot more venture capital, right? At various stages, whether it's pre-seed, seed, Series A, Series B, et cetera. A lot more venture capital, because there have been successes now.
There have been multiple IPOs. There have been multiple exits. So people have seen the success, right?
So a lot more startups. And a lot more founders: if you look at some of the IITs, there are incubation cells in each of these universities encouraging startups with some money, which was never there earlier; it started much later.
I'm talking about when I started Netmagic, right? There were a few VCs at that time. Now there are like a hundred VCs available.
So a lot more mentorship, right? Because there have been successful founders who now spend time with some of these startups, and some are even involved in certain funds. Overall, even the government has encouraged startups.
I think the Angel Tax was a disaster, but fortunately the government realized it and fixed it. So across the board, the value of the startup ecosystem has been realized. I remember that hiring talent was tough at Netmagic because nobody wanted to join a startup at that time, right?
Now we are getting people from top companies willing to leave their jobs and say, can we join a startup? Because they say, they know that it’s not only great learning, but it’s also a great wealth creation opportunity, right? So I think it’s been a huge change in the last 25 years.
Siddhartha Ahluwalia 50:08
And if, you know, if I ask you, like, how have you evolved as a founder, as a person in your journey?
Sharad Sanghi 50:15
So I've had to learn a lot. Although I had an engineering background, for almost a year before I started Neysa I did courses: the Deep Learning Specialization by Andrew Ng on Coursera and a few others, MLOps specializations, courses from NVIDIA, et cetera.
So I did a lot of work myself to prepare for this. And I learn every day, right? For me, it's been a great journey from a learning perspective.
I think in the last couple of years at Netmagic NTT, my learning had stopped, which is why Neysa is so exciting for me. But other than that, I think I've become a lot calmer. I used to worry a lot more.
In the early days of Netmagic, I wasn't sure how long we would survive, et cetera. Now I have a lot more faith, a lot more belief. That experience matures you; we know there'll be ups and downs in any journey, and the ability to withstand them is better. My work-life balance has also become much better than it was earlier.
So I think they're all positive changes.
Siddhartha Ahluwalia 51:42
Thank you so much, Sharad. It’s been a real privilege to have you on the podcast.
Sharad Sanghi 51:48
It’s been my pleasure. So glad to meet you, learn from you, and we’ll stay in touch.
Siddhartha Ahluwalia 51:54
Thank you so much.