About the guest:

Parveen is a UK-based senior quality analyst consultant at Thoughtworks. As a quality advocate, she believes delivering high-quality products is everyone's responsibility. She loves collaborating with teams and optimizing processes, tools, and methodologies to enable the creation of high-quality products. She is also an international speaker, sharing her stories and experiences in testing to inspire people around the globe. In her spare time, she plays the role of Wonder Woman for her two lovely kids.

Find our guest on:

Parveen's Twitter
Parveen's LinkedIn
Parveen's Blog

Find us on:

On Call Me Maybe Podcast Twitter
On Call Me Maybe Podcast LinkedIn Page
Adriana’s Twitter
Adriana’s LinkedIn
Adriana’s Instagram
Ana’s Twitter
Ana’s LinkedIn
Ana's Instagram

Show Links:

Thoughtworks
Exploratory Testing

Additional Links:

Blog Post: Observability for Testers
Blog Post: Why Observability Matters for Testers
O11ycast Podcast: Ep. 26, Unknown Unknowns with Parveen Khan of Square Marble Technology
Parveen’s Speaking Engagements

Transcript:

ADRIANA: Hey, everyone. Welcome to On-Call Me Maybe. I am your host, Adriana Villela. And with me, I have Ana. I'll let Ana introduce herself.

ANA: Hi, y'all. My name is Ana Margarita Medina, and I'm very excited. We're going to have an amazing episode today. So I'll switch it back to our co-host, Adriana.

ADRIANA: Today with us, we have Parveen. I'll let Parveen introduce herself. Why don't you tell us a little bit about yourself and also how we connected, how we found each other.

PARVEEN: Yeah. Hello. So I'm Parveen Khan, and I'm a senior QA consultant at Thoughtworks. I work as a quality advocate, all about quality with the teams. And I also share my learning and experiences by speaking at different conferences, writing blog posts, and sharing some of my thoughts on podcasts. I'm super excited to be here today.

So I still remember such a simple conversation. I think I read a blog post about observability myths, and that's where I thought, okay, this is so cool. It's such an easy way to explain all of this around observability. Because I know there are a lot of different blog posts available, a lot of resources out there. But when I read that blog post, I was so fascinated that I reached out to Adriana on LinkedIn, saying, "Oh, this is so cool. You've given such great information out there in such a simple way so that everyone can understand." So I think that's how we met.

ADRIANA: Yeah, totally. And I love that we were able to connect through LinkedIn. And it's funny because you're not the first person that I've connected with through LinkedIn because of the blog posts that I've written. So I really appreciate it when people reach out to want to chat about these types of things. And I'm super stoked that the blog posts on observability resonated with you. 

And after you reached out, we booked a meeting to talk about observability and QA. And you opened my mind to this whole new way of looking at observability and QA. Why don't you talk a little bit about how you started applying observability as part of your role?

PARVEEN: For me, I think it was never like, okay, this is what observability is, and this is how you can use it, and this is how I can try to make use of it as a QA; it was never like that. So I think it was a way of me trying to work through different challenges at work. I remember I was working with one of the teams, and I came across a lot of challenges where I was unsure why these things were happening. We kind of had no information at all. It was interesting; I was just trying to read a lot about observability at that point in time.

I just came across this new term observability and then how it kind of connected between...sometimes it's like you cannot understand when to use it until you are in that situation. So I think for me, it was more of like I was in that situation where I realized that, okay, this is the reason why we need observability. So I think that's where my learnings and my understanding started from, and then that's where I started to learn a little bit more about it. 

Because I think for me, initially, it was more of like, okay, this has nothing to do with me. I don't know if this is something as a QA I can make use of it, or I can try something on it. But I think the more I tried to learn and explore more about it and found those challenges on the product that I was working on; I think that's where it opened up a lot of possibilities for me, okay, I think this is how I can use this being a QA, and this is how I can try to add some value from a QA perspective. So I think it started in that sense for me.

ADRIANA: That's so cool. And I think it's interesting, too, when you talk about trying to understand what observability is because I don't know about you, and maybe Ana, maybe you can relate to this as well. Like, when I first heard the term observability, and it was, I would say at least like two years ago, I swear to God it took me so long to wrap my head around what it was. I don't get it; I don't get it. I feel like it's something that's important, something really cool that we need, but I don't get it. And all the definitions were so academic, and I'm like, ah, what the hell is this thing?

ANA: I can relate 10,000% about everything that was just said in the last few minutes. Because as Parveen was saying, that part about learning you hear a word and you're just like, what is this? This sounds interesting. Let me go research and see how this ties into the world as I know it and what you kind of call a mental model. And then you start asking more questions, and you're like, oh, idea, now this all starts clicking. And that's what a lot of DevOps means to me, in my opinion.

You hear the word DevOps; you have to kind of come together, and it's like, dev, ops, collaboration, communication, tada, we have amazing things. And with observability, I felt like that's a lot of what the space has brought. For me, I have that similar experience where I heard the word, and I was like, this means nothing to me; move the page, keep on going. I come from chaos engineering, like; that was my introduction to systems, chaos engineering infrastructure. And I was working at Uber during that time, and we had Jaeger. 

So I started learning about tracing without knowing anything about observability. And I was like, we had our internal tool for metrics, and we had M3 and Jaeger. And then all of a sudden, I was like, I like dashboards. These things [laughs] make a lot more sense. I come from a perspective where I was like, I understand observability to the point that I need to, but I don't need to dive into it. 

And then I'm coming back to the table four or five years wiser. And it's like, oh, actually, the more we know of our system, the better we're able to understand the system that it engages with on a day-to-day basis of our users. And what happens internally is when we have our very complex architectures of like system 1 calls database number 20 but then goes back to database 1, what happened in this time span of five seconds? [laughs]

ADRIANA: Yeah, totally, totally. And for me, it was like this, oh, I get a holistic view of my system? Because from my personal experience, I remember back in the day, even seven years ago, I remember I was helping troubleshoot an application. It was a vendor application that we had running in prod, and it was slow. I talked to the database person, and they're like, "No, the database is performing fine." I talked to the network person, "No, network is performing fine, hard drive's performing fine." I'm like, "Guys, something's not working, [laughs] but everyone says that their thing is running fine. What the hell is going on?"

And I feel like observability unlocks this new level, expert level where all of a sudden you're like, oh my God, finally. I can find that little needle in the haystack. I have that visibility into the thing that's not working for me properly. Parveen, when you and I were chatting initially, that was part of the aha moment for you. As a QA tester, you're like, oh, I know what's wrong.

PARVEEN: [laughs] Yeah, exactly. I think it's about figuring out how to know where things are wrong. It gives you the ability to look for some more information. Because as a QA, when you just go to the developer and say, "Oh, something is broken," it doesn't give them anything to go on. Okay, something is broken, but what exactly is broken? So I think as a QA, I'll be in a better position to give a bit more detail around what's happening behind the scenes, what has broken.

So it's not like me giving the solution of oh, here's the thing that is broken, but trying to give more information around okay, I've observed these kinds of things, or I have noticed these things when I was trying to see where something has been broken. So I think it's about trying to help navigate or trying to give a bit more information to the developers.

ADRIANA: Yeah, totally. When you and I chatted...I had a role as a QA tester early in my career. And I remember when I was testing, I'm like, it's broken, but I don't know why. And I'm like, I don't want to just sit here until the developers figure out what the hell's going on to find out what the problem is. I want in. I want to figure out if there's something I can do to find the problems. So I wish that observability had been as mature as it is now 20 years ago when I started my career. That would have been so awesome. [laughs]

ANA: I have a similar experience of being put in a QA position where just like, here are the screenshots, here are the steps I took. Here's what didn't happen. Oh, you won't get back to me for another week? Yeah, let me try to recap all my notes when you do get back to me on why it was pink and not blue. I don't know [laughs] what to tell you; the system just didn't handle it properly. So I think the more that we're able to know the right information at the right moment, it's also critical. It is not just about knowing this information. 

Because when we talk about having to fix something when it really matters, like being in those moments where your customer is reeling, how do you make sure that we can get them the most assistance? Or we're really close to launch; how do we make sure that we finish the backlog of your tickets? So yeah, that moment where it's like, oh, it's crunch time. Why didn't this work? Why is Bobby mad at me right now? And why is my supervisor, Veronica, still questioning, like, "Hey, why hasn't this case been closed in so long?" It's really getting that context when you really need it.

ADRIANA: Yeah, I totally agree. And it doesn't apply just to the pre-prod QA, either. I mean, we're basically testing when we're in prod every day because our users are always finding new and interesting ways of using our systems in ways that, as a developer, you're like, oh, I didn't even think that that could be done that way, so you get the insights both in the pre-prod and the post-prod, I guess, areas for your system. And that makes it very valuable on both ends. 

And I think having the additional insights in the pre-prod stages won't make a perfect product, but it will certainly help to get rid of some of the wrinkles before you go into production, which I think is awesome.

PARVEEN: Yeah, it's more of like, you can learn from the production system because that's where the actual things happen. That's where our users are using our features. I think it helps us in trying to understand not just how our systems are behaving in the production system but also how our users are using our production system. And then using that as feedback and trying to add that into our process. 

And then I like to say this, like; you cannot shift right until you shift left. You have to shift your mindset to the left and try to think. That's why I like to say that observability is more like a mindset change. And it's more of like; you try to think whenever you're releasing the feature to production, it's not about keeping your fingers crossed and saying that "Oh, I hope everything goes well." It's about asking those questions beforehand and saying, "Okay, if I release this feature to production, how would I know something has gone wrong?" 

Again, I'm not trying to say that observability is rocket science, and you will know everything with it, but still, you are prepared. At least your aim is you don't want to fail. But your aim would be something like, even if you fail, do you know how to get some information about it instead of blindly going around and trying to see, like, I don't know where to go kind of situation, right? 

ADRIANA: Yeah, totally. It's the idea of it's enabling you to fail fast, right? 

PARVEEN: Yeah.

ANA: Beautifully said. Like, fail fast, get comfortable with failure because if we're not comfortable with failure, you're going to end up having more failures in the moments that matter to your customer or any of your users. 

And it's a lot of what I've been iterating for the last four years but from a whole different angle of like, the faster your engineering team is able to get comfortable with failure, the easier it's going to be when your pager goes off when that incident gets started because your team already knows what to do. They know, oh, this is how my observability tool works. And hopefully, you're working in an organization that only has one, not like five that you have to be like, [vocalization] was it this one? Or did we migrate this service to our new one? 

And it's those little things that really add those extra minutes, those extra hours to an incident getting closed or any type of ticket being closed and a customer being happy once again and staying as a return customer with oh, what I came to shop for actually got into the car. I got a tracking code, perfect. Or where is it that it's failing, and how can we make it be better? 

And as Parveen was saying, a lot of it is injecting it and knowing what's going on; the further left you shift, the more important it is. I'm a huge fan of having perfect, ideal-world DevOps in an SRE world across all your cycles, but I know that's extremely expensive. So it's like having it in pre-prod and staging; the more context and observability that you have, the more comfortable you are with failure. You're going to start seeing that your team is going to have a lot fewer of those really long outages when you are in production.

And as you see organizations start doing the work, you're just like, I'm cheering for them on the sidelines. Or when you hear about a really expensive outage, you're like; we can do better. We got this. And now you're just trying to herd like 500 engineers at a big org, like; I'm cheering for y'all. Like, #hugops, you got this.

ADRIANA: It's so true. I think that goes with the mindset shift that Parveen was talking about earlier, where we make it a safe space for people to fail. I think observability helps provide that safe space. But then I also think that it's up to leadership to allow for that safe space.

Because I've been in organizations where I had a team that was managing a tool and there was an outage, and they were able to figure out right away the root cause of the outage. But upper management was pissed off that there was even an outage at all. And it's like, we should be celebrating the fact that the team identified the outage quickly and was able to resolve it. So why are we punishing them for that? We should be celebrating that if anything.

I feel a lot of large enterprises tend to focus on the wrong things. So creating that safe space for people to fail and enabling that fast recovery through observability is what will, I think, rid our industry of basically on-call PTSD because I think that's really what it boils down to. It's not so much the act of being on call, but it's the fact that you're woken up in the middle of the night. 

Or even if there's an outage in the middle of the day, you've got five jillion people breathing down your neck saying, "What's wrong? What's wrong? What's wrong?" Rather than like, "Hey, guys, chill, we got this. We've got the information that we need to solve this problem. Just give us the space, the mental safety, if you will, so that we can handle this effectively."

PARVEEN: Yeah, we can relate this to those four key metrics in DevOps that we talk a lot about. Like, if that were the case, if you didn't allow people on the team to fail, then one of those four key metrics would be something like, oh, never have production bugs or never fail, but it's not.

One of the four key metrics in there says that the recovery...how quickly can you recover? So it explains. It all ties back in very well that, okay, it's not about not completely failing at all or not shipping any bugs at all but trying to learn from those and trying to recover from those and trying to improve your existing visibility of your services by using the feedback from those kinds of issues or failures or those incidents.
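
To make that metric concrete, here is a minimal sketch, purely illustrative and not from the episode, of computing "time to restore service" from a handful of incident records; the incident data and field names are hypothetical.

```python
# A minimal sketch (hypothetical incident records) of the DORA
# "time to restore service" metric Parveen refers to.
from datetime import datetime, timedelta

incidents = [
    {"opened": datetime(2022, 9, 1, 10, 0), "resolved": datetime(2022, 9, 1, 10, 45)},
    {"opened": datetime(2022, 9, 7, 22, 30), "resolved": datetime(2022, 9, 8, 0, 10)},
]

def mean_time_to_restore(records) -> timedelta:
    """Average time from an incident being opened to service being restored."""
    durations = [r["resolved"] - r["opened"] for r in records]
    return sum(durations, timedelta()) / len(durations)

print(mean_time_to_restore(incidents))  # 1:12:30 for the records above
```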

ANA: I'm someone that's coming into the observability world somewhat new. As I mentioned, for me, I've worked very close to the space, but I don't feel like I know observability. How often are you looking back at your history of observability traces when you're coming in from the QA testing perspective? How far back do you go to see how a behavior happened? And when do you start thinking my system has changed a lot since that moment that we captured this trace?

PARVEEN: Usually, what happens is when I'm trying to use it, it's more of whenever there is a new feature, so I try to work around those. So it happens whenever I'm testing that feature. I kind of look at, are there any new logs that we require? Or are there any kinds of logs that are helping us as a team to understand what's happening?

And then I think at that point of time, with those traces, I think I can see the whole entire history of how those requests are going across. And that's how I kind of look at, oh, I've never seen this. I've never seen this request, or I've never seen this kind of a log before. What is happening? Has something changed? 

I think that's how I come across something that was already existing. So it's not exactly like, okay, today I'm going to look at the old logs or traces and then figure out if something is wrong. But it's all about when I'm trying to explore the features or doing some kind of regression or anything. So that's where, if I notice something different, that's how I come across it: has anything changed in terms of these different logs, or traces, or anything like that? Yeah.

ANA: Nice. Yeah, getting to answer what has changed is usually the most important.

PARVEEN: Yeah, yeah. Especially as a tester, when I'm exploring, I think it's like, you ask a lot of questions. Oh, what is happening? Oh, maybe can I just look at there and see what's happening? So I think this is what...observability allows you to ask those questions and figure out those answers. So I think it ties both together to see what has changed and how things are working.

ADRIANA: Yeah, it's so true. I was wondering if you could elaborate a little bit more; like, your perspective on observability is especially unique because, as I recall, you're an exploratory tester. So it's a very different take on what I would say, I guess, is the traditional thought around observability and testing even. So can you talk a little bit about that?

PARVEEN: What I try to do is, like, it's not just about leveraging what is existing and just using that as is. But what I try to do is when I'm testing any feature or when I'm exploring the system, if I notice something that, oh, I don't understand what's happening here, or I think that we need to add a little bit more information, maybe it could be a log or a metric, or it could be that we need some more events around this so that we know what's happening and can capture it as a business metric.

So it goes around those kinds of exploring and figuring out, what more do we need? And then feeding back to the team saying that, "Oh, maybe..." it's more of a collaboration around this. It's not like, okay, we have this observability in place, tick a box, that's it, all done. So it's more of trying to get into it as a practice, as a team, to understand it's not exactly doing specific testing for logs or metrics or any kind of observability but making use of exploratory skills and trying to understand, do we have enough information going around for this while testing this feature, or this API, or this request?

So I think then feeding back to the team and pairing with the developer to improve that and see if that helps. And it's not about adding too many logs or too many traces just because we need them, but also coming to a practice, as a team, of asking which business-critical journeys might need some kind of extra information when we release this particular feature. And then maybe I think this is how I use my exploratory testing skills, pairing more with the developers and trying to get these things done.

And this is how I feel as a tester, we add value, not just by using what is existing but also trying to improve the instrumentation of the existing system. And it's all about asking those questions while you're testing, talking to your developer, and trying to understand. So I think that's how I like working and kind of improve what is existing and add some more value as a tester in terms of observability.
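
As a rough illustration of the pairing Parveen describes, here is a minimal sketch using the OpenTelemetry Python API (our assumption; no specific tooling is named in the episode) of a developer adding the attribute and event a tester might ask for on a business-critical checkout journey. The span, attribute, and event names are hypothetical.

```python
# A minimal sketch (assuming OpenTelemetry for Python; span, attribute,
# and event names below are hypothetical) of instrumentation a tester
# might request while exploring a business-critical journey.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console so the example is self-contained.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

def submit_order(order_id: str, total: float) -> None:
    with tracer.start_as_current_span("checkout.submit_order") as span:
        # Attributes so failed orders can be sliced by ID and order value.
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.total", total)
        # An event marking a step that was previously invisible from the UI.
        span.add_event("payment.attempted", {"payment.provider": "example-pay"})

submit_order("ord-123", 42.50)
```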

ADRIANA: Yeah, that's so awesome. And I think that makes such an awesome point and feeds so nicely into the definition of observability, where you don't have to add more code to your system because you've got all the information that you need to understand what's going on in the system. It's like, it's so intuitive when you hear it, and you're like, duh, that's so obvious. But I don't know that we have enough of that happening. 

And if anyone out there is listening who isn't doing this, start doing this because this is the coolest application of observability. And I think it will save your on-call teams a lot of PTSD and stress by ensuring that you have everything that you need to make your system observable.

ANA: I think the whole on-call PTSD is huge. And there still needs to be more conversations around it in general. Like, how is it that we can start doing work to make it better? We talk a lot about how to have blameless post-mortems and be able to really take the blame off when we're reviewing what actually happened with our teams. But there's a lot of work that can be done pre-outages that actually sets you up for psychological safety that's ongoing that is the opposite of tech debt. Like, this work is actually going to have positive impacts. And it all comes in into your psychological safety. 

This is usually like my bread and butter of being able to be in DevOps and SRE. I feel like I'm constantly preaching the human layer of the work that we do. And a lot of it comes into get comfortable with failure fast. But how is it that we can make our systems better earlier on in that shift-left process that we have enough practice when it comes to hey, I really need to understand why my top 10 users are actually having issues with the images that are loading up on the application.

Like, getting a chance to answer that question or getting a chance to sit down with an engineer and go through your tickets and being like, so if we look at the trace, the model number of the phone that they were using is actually the reason that we're having issues with this operating system. That context just creates so much value to an engineer and answers the right questions because everything is binary, ones and zeros. And we figure out things that way versus these things of like; we don't have enough information. We need to talk to another team. We actually are not collecting this because it's too expensive.

PARVEEN: Yeah, exactly. And I think it's about, you'll have to think about who is the end user. Who is the end user of those logs or those metrics or whatever instrumentation you're trying to build? And when you're trying to add those, it shouldn't just make sense to you when you're doing it. If someone else is debugging your service when you're away, just as an example, then they should be able to understand it even without having a lot of context on your service. I think that's how it can be a bit helpful.

ADRIANA: Yeah, totally. And I would say as testers; you are basically put in that position because, yes, you understand how the system is supposed to work functionally speaking, but you're not privy to the code. So to have that, you're almost like the first layer of defence, if you will, because it's basically this idea of I don't know the nitty-gritty, but observability will help me understand what's going on under the covers. And I think that's one of the things that I find so powerful about observability myself so...

PARVEEN: Yeah, definitely.

ANA: I think the question of, like, this is the information, the context of what you're supposed to know about how a system is going to work goes back to the earlier point of psychological safety. You have the mental model of your system of, like, I know what I know because of all the knowledge that I've put into my brain, all the outages that have dealt with, all the applications that I've worked with. 

But we all come from different projects, from different companies, from different teachers that have taught us this information, or we taught ourselves, so you have different ways of thinking about how a database talks to a caching layer. And if you're not able to see how that process actually happens because you don't have the source code, or you have an architecture diagram that's outdated, you're not giving your engineers a place where they can learn.

And all of what engineering is is exploring, researching, doing a better thing. We push the industry forward two years. We revisit all the work that we've done. We grow our user base. We go back, and we research. It's a lot of the foundations of what our job is. When we get down to our nitty-gritty of what we're actually trying to solve, it's just asking another question and answering again and collaborating with others. So that context is huge.

PARVEEN: Yeah, I think it's kind of a living document, right? [laughs] It’s never going to get outdated, kind of, yeah. 

ADRIANA: Yeah. Yeah. It's the best kind of documentation, really. It’s the one that you're not having to constantly update.

PARVEEN: Yeah. [laughs]

ANA: Back to that comment about documentation, I live on the SRE side, so fighting fires versus the tester, which is a lot more pre-stuff sometimes. And it makes me think of, like, how do we make sure that we're collecting the proper amount of information from traces and updating that documentation? How do we build that into more cadences with teams? Because it's one thing to write it in the bug that your runbook says this. But we don't go back and actually say, oh, we learned [laughs] that the system doesn't interact this way. This is in our backlog. Look at your ticket, 3,044. [laughs]

PARVEEN: I think documenting...[laughs] I think we can never document enough, with every new possibility that keeps coming up. But I think it's more of, as I said, trying to learn from those incidents and then asking, can we take away some kind of action from those incidents and improve what we have? Yeah, it's not about having those really long runbooks that say what you need to do whenever this kind of incident happens. You cannot have a list of every other possibility out there.

But I think whatever we learn, if that is something very critical or something new that we learned from the production system about how the users are using it or how the systems are behaving, that's where we try to document a bit of information and then question ourselves: okay, why did we miss that? What kind of information are we missing in terms of the logs or traces we already have? So then, stepping back and trying to figure out what exactly was missing, and how could we have found it if we had this extra data within our systems or within our services?

So I think it's all about trying to step back a bit and ask those questions about what could have been done better. What could have been added a little bit more? And trying to look for those missing pieces...I think once we get into that continuous process, it's not a one-time thing, like, once you have this or you learn and then forget about it. It's a continuous process of trying to ask those questions and trying to improve.

And it's like, every team member, again, it cannot be done by one person all the time. It's like, every team member has to be involved in this and asking those questions and reminding that, okay, how could we have done this better? 

And what happens is sometimes when it has taken a long time for a feature to develop, we're like, okay, let's deal with the logging and monitoring or anything like that later on. Let's just release this. And we need to be very strict about things like that, like, okay, we shouldn't be doing that. Because it's about the cost: do you want to release the feature without knowing when something goes wrong and then spend a lot of time debugging? Or do you want to spend a little bit more time, try to do it the proper way, and then release it?

So I think it's a very continuous process of continuously adding that documentation around learnings, adding to the runbooks, and improving the existing data that we have.

ANA: It seems like when we signed up to work in engineering, we just signed up to continuous learning. Like, that was our job description, continuous learner engineer.

[laughter]

ADRIANA: So, so true. Parveen, what you were just saying made something click in my head which is like, oftentimes, products are released with known bugs. And I was thinking in the context of having a properly observable system; it almost makes it okay to do so as long as you have the right amount of instrumentation in your code to basically enable you to troubleshoot those issues in prod. So it's like, yeah, I know I've got this bug. But if blah happens once I release it, it's okay because I've got all the tools that I need to figure out what's wrong, which, for me, is kind of a mind-blowing realization. 

As you were talking, I'm like, holy crap, this is the coolest thing ever. Because all of a sudden, it's like, you've decreased the risk of releasing bugs into prod. Like, yes, you're still releasing those bugs into prod. But you've decreased the risk of the repercussions of doing that as long as you have the proper instrumentation. And what you're stating, like, yeah, you've got to put your foot down with regards to instrumenting properly before releasing, that's almost more important than squashing those more minor bugs before releasing into prod. So that was my aha moment for today. So thank you for that. [laughs]

ANA: Wait, did you just learn continuously?

PARVEEN: [laughs]

ADRIANA: I did, on the fly, just learning.

[laughter]

ANA: That's amazing. As you mentioned, putting known bugs into our users’ hands and having the proper amount of instrumentation also made me think about A/B testing. A lot of the work that we do with observability and instrumenting properly really allows us to A/B test properly, in terms of, like, we're really trying to make sure that we're thinking about our customers in the best ways. So we take all of our users, and we give 50% of them menu type A, and we give the other 50% menu type B.

We were actually able to collect information about the frame size of their windows and what buttons they’re actually clicking, and what that actually does for whatever metric this A/B testing project actually needed. Are we trying to get them to login faster? Are we trying to get them to sign up for rewards? Thinking about banking apparently [laughs] in my head today. 

That really does allow us to get the information to those engineers faster, which makes our development process cheaper, which means that we're also iterating with the idea of we can fail fast, but we're also saving money by doing so, which goes into the whole conversation of cost of downtime, which is probably the reason we all have jobs. [laughs]

PARVEEN: Yeah, I think that's a good point. With observability, I think you're in a position where, okay, if you want to release to a certain percentage of users using any kind of, like, it could be A/B testing or toggles or anything like that, you can release to some percentage of people, and then you can sit back and see how they're using it. Are there any unknowns? Or is everything failing? Is everything on fire?

But still, you have that power of seeing things through and releasing it to just a smaller number. And you have that ability to look through and make a conscious decision: okay, everything looks fine. Let's go ahead and release to the whole entire 100% of the users. And again, that's the point of how observability empowers you to make such decisions.
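
As a sketch of the kind of partial rollout Parveen describes, here is one way a percentage-based flag could be wired up so the decision is also recorded on the active trace (again assuming the OpenTelemetry API); the flag name, bucketing scheme, and attribute are hypothetical, and a real feature-flag service would normally handle the bucketing.

```python
# A minimal sketch (hypothetical flag name and attribute; assumes the
# OpenTelemetry API) of a percentage rollout whose decision is tagged
# on the current trace, so the two cohorts can be compared in an
# observability tool before rolling out to 100% of users.
import hashlib
from opentelemetry import trace

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into the rollout cohort."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < rollout_percent

def render_checkout(user_id: str) -> str:
    use_new = flag_enabled("new-checkout", user_id, rollout_percent=10)
    # Tag the active span so errors and latency can be split by flag variant.
    trace.get_current_span().set_attribute("feature_flag.new_checkout", use_new)
    return "new checkout page" if use_new else "old checkout page"

print(render_checkout("user-42"))
```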

ADRIANA: It's true. Basically, observability plus feature flagging, power couple.

ANA: I was wondering who of us was going to be the first person to drop feature flagging into this conversation.

[laughter]

ADRIANA: It was only a matter of time, right? [laughs]

ANA: I think as we have this conversation, one of the things that came up for me: Parveen, what has been your biggest takeaway, your aha moment as an explorer, a QA, when you're doing observability in the last six months, 12 months?

PARVEEN: One of them, like, I think... 

[laughter]

ANA: Fine, you can choose two. 

[laughter] 

PARVEEN: No, no, I think I'll just choose one of them. I think it is those moments where I don't have to rely just on the UI and that little network tab to look through the requests. I like getting used to these kinds of new tools to access this different data and different visualizations. And I think it kind of gave me, I can say, a pair of glasses to view the information that is going through. [laughs] That was my aha moment where, okay, oh my God, you don't have to just click through and look for how things are.

If something is not working on the UI, you were kind of stuck, like, oh, you don't know what's happening. But having this ability to use these kinds of tools and knowing how to figure out some kind of information around what you're trying to explore, that's pretty cool and powerful to me. That was kind of an aha moment. Because as I said, in the beginning, I didn't know how it could be useful for me or whether it was something that was definitely nothing for me.

So I think the more I was trying to learn about it, this was one of the things that I realized, and it was [laughs] something that I really liked. And I was like, oh, wow, I know what's going on. It could be something right or wrong. It doesn't have to be the exact right thing that you know, but still, you have that ability to look through. That's why I said I kind of got my cool pair of glasses when I'm trying to explore by having this ability. [laughs] So I think that was my aha moment, yeah.

ADRIANA: That's awesome. Now we have to make observability sunglasses.

[laughter]

ANA: Please. I'm literally writing down swag ideas as we speak of observability.

PARVEEN: Oh my God. That’s so good. [laughter] I know it's a very weird comparison or an object that I used. But I think that's how I felt literally. [laughs] Like, okay, I know what's happening.

ANA: I mean, talk about one of the biggest aha moments people can actually have. I used to remember the first time I put on glasses as a little girl, and I was like, whoa, the world is so vivid and large. [laughs] Like, what do you mean? It was like, before glasses, after glasses. 

And I think a lot of the conversations that we're having really bring that up to me where it's like what my life was like as an engineer, tester, QA, SRE without observability. And then the moment we discover observability and all of a sudden, that context just completely changes everything to the way we have a holistic view of our systems now.

ADRIANA: Yeah, it's like this mind-blowing transformation. You're like, oh my God, how did I live my life before this, right?

ANA: You mean I learn better when I have glasses? That makes a lot more sense.

[laughter]

PARVEEN: Oh my God.

ADRIANA: I can totally relate. I am blind as a bat. [laughs] So, as a long-time contact lens wearer, I very much appreciate the extra visibility that my contacts give me, just like observability. [laughs]

PARVEEN: Oh God.

ANA: I hope any of our listeners who catch this quote, tweet it out, and give us all a shout-out on Twitter.

[laughter]

ADRIANA: I feel like this podcast episode is so full of awesome nuggets. I just want to tweet them all out. One final thing, because I know we're coming up on time, that I would like to discuss briefly is, Parveen, how did you get buy-in from your fellow testers?

PARVEEN: Again, if you're working on a team, you can't just say everyone has to agree. But I think it's more about trying to share it as, okay, this was my pain point, and this is how I learned, and this is what is helping me. So I've been using those pain points and trying to influence through them. I think that's more of how to get other testers interested in it as well. And I think it's more of like, everybody has different views or different things that they can use.

So I think I just tried to share and, again, coming across as using those pain points as how this thing has helped me or how I'm trying to use this and trying to keep sharing about this. And I say that, oh my God, I talk all about observability. [laughs] I kind of feel myself like that. So I think it's about trying to talk and trying to, you know, how I'm using it, and maybe you can try as well. So it's just about creating that awareness and trying to influence and trying to share those experiences as well.

ADRIANA: You know, when you got your first fellow tester to see the magic, how was that?

PARVEEN: I mean, I feel like still, it's a lot of different testers, like, I don't know, I can't talk about everyone, but I think especially those who I have worked with previously. So I think it's more of slowly getting used to it and trying to understand. It's like everyone needs that aha moment when they're trying to use this, and that's where they have to figure out, oh yeah, this is where I can use it. I think it needs to be sparked into their way of thinking, the way they're using things.

I think that's where one of the testers that I was working with, when I walked him through everything, okay, this is how you can do it, he was like, "Oh my God, I thought, okay, I need to build some very complex queries to access this different data, the logs, for example. So that is the reason why I was very resistant to go down that lane of observability." And I was like, "No, you don't have to. It doesn't have to be a very complex way or complex process for you to be able to access this. If it is very complex, then there's no point in having observability in place."

So I think it's about trying to pair and show it's easy. Because when you hear this, I think it's more of like, oh, it's nothing to do with me. It's like developer thing or an SRE thing, so it has nothing to do with me, so let me not look at it. With one of my testers that I was working with in my team, I think he was like, "Okay, I didn't know that it was so easy to access this information." And it's not like you will get to learn or understand how to use this or how to leverage this within a day or two; it needs a bit of practice. 

And you can't practice that; oh, now today I'm going to learn all the logs about my system. You can't do that. It's about having those real scenarios, the way you're trying to explore a feature, and then trying to learn together. It's never, oh, today, I want to learn about logs. It's never going to happen. So I think when I walked him through all of this, it made him go, like, oh, yeah, I get it now. When someone says, oh, I definitely want to give it a try and see how that can help me, I feel like, okay, I've influenced them enough to start thinking about it. [laughs]

ADRIANA: That's awesome. It must have been a great feeling, too, to pass that aha awesomeness to somebody else, and then he can pass it to another person.

PARVEEN: Yeah, yeah, exactly.

ANA: You're passing on the baton and the sunglasses to the next person to help you advocate more. [laughter] We're making some form of glasses a thing. I'm just saying that on record.

ADRIANA: Oh my God. We so are. We so are. 

[laughter]

PARVEEN: Oh my God, yeah.

ADRIANA: Cool. I guess that is a wrap. We're coming up on time. Parveen, thank you so much for joining us. As always, it's such a delight chatting with you. I feel like every time we talk, I get some new insights into this topic that, you know, I thought I'd dug in enough, but clearly, there are more and more nuggets. So thank you for coming on and sharing your insight. 

I think it would be awesome if we could, at some point, do a follow-up with you as well to see where things are at as you continue evangelizing testing and observability. Thank you for opening our eyes into this really awesome and exciting practice of observability. I think it's so valuable.

PARVEEN: Thank you so much, actually. Thank you for having me. I think it was such a great conversation with both of you. Even I got a lot of learnings and lots of nuggets for me, [laughs] and I think it was awesome having this chat with you both.

ADRIANA: Awesome. Awesome. Cool. Well, so for On-Call Me Maybe, I'm signing off. I'm Adriana Villela with my awesome co-host...

ANA: Ana Margarita Medina signing off too.

ADRIANA: And we'll see you on the internet.
