When Sanjana Kaundinya chose Confluent for her first job out of college, she was eager to learn as much as possible—and in the two years since, that’s exactly what she’s done. Below, Sanjana explains the projects she’s worked on, the challenges she’s faced, and the knowledge she’s gained working side by side with some of the most senior members of the team.
I’m a software engineer on Confluent’s Apache Kafka® Global team, which is part of the Event Streaming organization. Our team builds distributed systems to handle multi-geo-replication for Kafka—our charter includes Cluster Linking, Replicator, and Multi-Region Clusters. This ensures that data is replicated across multiple Kafka clusters, which is critical for a variety of our enterprise customers, especially those in the financial sector with regulatory requirements. A lot of that work is self-contained, but we also collaborate with the Metadata and Kafka Engine teams, which are within the Event Streaming organization, and with the Cloud Native and Storage organizations, which reside in the greater Kafka organization. For backup and restore, for example, we work closely with Storage to see how Cluster Linking can help.
Day to day, I spend about half of my time helping our customers be successful. I may get pulled into an incident to lend some expertise or follow up on customer escalations, or to fix bugs or improve our internal monitoring. The other half of my time goes to coding and designing, which looks different depending on the time of year. When we’re close to a new release, I’ll be testing and fixing bugs. Otherwise, it’s more getting the design work ready, building consensus, and then writing those design docs and one-pagers—as well as code reviews and of course, writing my own code.
I first heard about Confluent when I was in college, desperately applying for jobs like every other senior. A friend saw the company at a tech job fair and suggested I look into them. Working with Kafka sounded compelling. I wasn’t very familiar with distributed systems back then, beyond what I’d learned in my courses and through an internship at LinkedIn, but the concept intrigued me. It was also exciting to see that one of the co-founders was a South Asian woman, like me. I applied for one of their new grad positions, and I got an email within a couple of days inviting me to interview.
I had a few conversations with the engineers, which were great, and then after they made me an offer, they flew me out to our former office in Palo Alto. I liked that—instead of using a visit just to hype up the company, I already knew they were serious. And I really liked the vibe; everyone I met was so approachable. I got to talk with super-senior people, and you never would have guessed they were a co-founder or one of the first employees. Everyone is so smart and they’ve accomplished so much, but they’re very unassuming and humble. That appealed to me. It’s not necessarily true at every company, and I think the people you spend time with early in your career can shape how the rest of it will go.
I joined as a software engineer on the Replicator team and spent about six months there. Replicator is an external data system that copies data between two Kafka clusters, allowing customers to create a globally replicated data system. All but one of us were fairly new to the company, and it amazed me how well we worked together. It really felt like that “one team” mentality, which is one of Confluent’s values. Then in January 2020, I moved to the Cluster Linking team to help build out Confluent Cloud. We’re solving the same problem as we were with Replicator, but the destination Kafka cluster pulls data directly from the source, rather than relying on Replicator’s external system to do the same. It was a drastic shift for me at first, because I went from working with other new employees to a team that included a lot of early Confluent engineers. Some of these people had been working at the company since 2015 and are committers to Apache Kafka! It’s been an amazing experience, too, though; I’m so thrilled and humbled that I get to learn from them.
Very recently, Cluster Linking reached general availability (GA) both in Confluent Cloud and with Confluent Platform 7.0. It’s been exciting, seeing that go to market. And now, I’m working on making the Cluster Linking metadata store more efficient, so we can make the product even more performant.
I was just talking with one of my mentees about this—in college and even as an intern, it’s much more prescriptive. Some classes give you space to design, but mostly, you’re told what to do. Because of that, you learn a lot about how to solve today’s problem and less about how to solve tomorrow’s problems. So I’ve learned a lot since I joined about how to code in a way that’s easily extendable down the line. I’ve also had to learn a ton about testing; unfortunately, that tends to be very under-taught in universities. I needed to figure out how mocking works, how to test functionality, and just all the different types of testing—unit, integration, system, regression, performance. Some of that was on my own, but I also had colleagues here to help.
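To make the mocking idea concrete, here is a minimal sketch in Scala. All of the names (`OffsetStore`, `LagReporter`) are hypothetical and illustrative only, not from Confluent’s codebase: the point is that a unit test can substitute a hand-written mock for a dependency that would otherwise require a real Kafka cluster.

```scala
// Hypothetical example: a small service with an external dependency.
// OffsetStore and LagReporter are illustrative names, not real Confluent APIs.
trait OffsetStore {
  def latestOffset(topic: String): Long
}

class LagReporter(store: OffsetStore) {
  // Lag is how far a consumer at `consumed` trails the latest offset;
  // it is clamped at zero so it never goes negative.
  def lag(topic: String, consumed: Long): Long =
    math.max(0L, store.latestOffset(topic) - consumed)
}

object LagReporterTest extends App {
  // A hand-written mock: returns a canned value instead of querying a cluster.
  val fakeStore = new OffsetStore {
    def latestOffset(topic: String): Long = 100L
  }
  val reporter = new LagReporter(fakeStore)
  assert(reporter.lag("orders", 90L) == 10L)  // unit-tests the lag arithmetic
  assert(reporter.lag("orders", 150L) == 0L)  // lag is clamped at zero
  println("ok")
}
```

An integration test would exercise the same logic against a real `OffsetStore` implementation; the mock keeps the unit test fast and deterministic.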
Something else that’s different in the real world versus a classroom is that you’re working on an evolving code base, and you need context to understand why something was built the way it was. The work we do here is complex, so it can take a lot of digging to understand that intent. Sometimes I’m going down a GitHub rabbit hole. It’s challenging, but it’s also super interesting. And I think talking with senior engineers who know all that history has given me a much greater appreciation for the whole software life cycle—I’m better equipped to think through the opposing forces you have to navigate whenever you’re trying to get something to market. No product is ever totally finished. You define the requirements for an MVP, and then with each iteration, you make it more stable and more mature.
Of course, there’s also learning in terms of picking up new languages, which is something you need to do anywhere you go. When I joined, for example, I knew Java well but I didn’t have any Scala experience. I did learn on my own—we have a LinkedIn Learning subscription, and Google and Stack Overflow are my best friends—but a lot of my teammates helped me out, too. What I appreciate most are the more granular comments, because that’s the stuff I’d never know on my own. I love it when someone takes the time to point out a cleaner, better way, because I think the point of Scala is making things as elegant and polished as possible.
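As a small illustration of the kind of granular feedback described above (the data here is made up, purely for demonstration): the same filter-and-transform logic written the way a Java developer might first attempt it, and then the more idiomatic Scala a reviewer might suggest.

```scala
// Illustrative only: sample data, not from any real system.
val offsets = List(3L, 42L, -1L, 17L)

// Imperative, Java-flavored first attempt: mutable state and an explicit loop.
var valid = List.empty[Long]
for (o <- offsets) {
  if (o >= 0) valid = valid :+ (o + 1)
}

// Idiomatic Scala: one expression, no mutation.
val validIdiomatic = offsets.filter(_ >= 0).map(_ + 1)

assert(valid == validIdiomatic)  // both yield List(4, 43, 18)
```

Both versions behave identically; the second simply says what is being computed rather than how, which is the kind of elegance Scala reviewers tend to push for.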
Building consensus is something I’m constantly learning. It’s such an interesting challenge because there are always reasons behind every different perspective—I might be trying to optimize for one thing and end up completely de-optimizing something else. I’ve found it helps to frame the conversation in terms of the implications for the other person. I’ll say, “Okay, this would work for us, but how would that affect your team?” Approaching something with empathy is a great jumping-off point to find a compromise.
The other challenge is just doing my part to keep up our culture. I have truly never encountered someone here who didn’t have time for me—often, they go out of their way—and I want to give back as much as I get. I think it really comes from being part of the open-source community; you can’t be successful there without helping others and being a developer advocate. But it’s one of the things I love about Confluent. I feel comfortable talking to anyone, including the most senior people, because they’ve made it that way. That motivates me not just to be a better engineer, but to be that kind of person myself. I want to pay it forward to the new people coming in and support them as much as I can.
Being on the Global Kafka team, I’m excited about trying to build a true multi-region Kafka. Confluent is moving toward solving for use cases rather than for technologies—so instead of Cluster Linking being the product, it’s migration or disaster recovery or multi-region replication as a service. I love that. Kafka started as something for software engineers, but now we’re taking it to a more user-centric place, where almost anyone in a company can use it. The customer doesn’t need to understand everything we’re using behind the scenes. What matters is that we give them a great experience.
Sanjana Kaundinya is a Senior Software Engineer who joined Confluent in 2019 after completing her bachelor’s degree in computer science at Cornell University. Since joining Confluent, she has worked on a variety of multi-region technologies including Replicator, MirrorMaker 2, Multi-Region Clusters, and Cluster Linking. As a member of the Global Kafka team, she was one of the original engineers to work on Cluster Linking and helped make the product generally available on both Confluent Cloud and Confluent Platform. Apart from software, Sanjana is an avid dancer with over 10 years of training in Indian classical dance.