
Enterprise content operations in action at NetApp (podcast)


Are you looking for real-world examples of enterprise content operations in action? Join Sarah O’Keefe and special guest Adam Newton, Senior Director of Globalization, Product Documentation, & Business Process Automation at NetApp for episode 175 of The Content Strategy Experts podcast. Hear insights from NetApp’s journey to enterprise-level publishing, lessons learned from leading-edge GenAI tool development, and more.

We have writers in our authoring environment who are not writers by nature or bias. They’re subject matter experts. And they’re in our system and generating content. That was about joining us in our environment, reap the benefits of multi-language output, reap the benefits of fast updates, reap the benefits of being able to deliver a web-like experience as opposed to a PDF. But what I think we’ve found now is that this is a data project. This generative AI assistant has changed my thinking about what my team does. Yes, on one level, we have a team of writers devoted to producing the docs. But in another way, you can look at it and say, well, we’re a data engine.

— Adam Newton

Transcript:

Sarah O’Keefe: Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about content operations with Adam Newton. Adam is the senior director of global content experience services at NetApp. Hi everyone, I’m Sarah O’Keefe. Adam, welcome.

Adam Newton: Hey there, how are you doing, Sarah?

SO: It’s good to see and/or hear you.

AN: Good to hear your voice.

SO: Yeah, Adam and I go way back, which you may discover as we go through this podcast. And as those of you that listen to the podcast know, we talk a lot about content ops. So what I wanted to do was bring somebody in that is doing content ops in the real world, as opposed to as a consultant, and ask you, Adam, about your perspective as the director of a pretty good-sized group that’s doing content and content operations and content strategy and all the rest of it. So tell us a little bit about NetApp and your role there.

AN: Sure. So NetApp is a Fortune 500 company. We have probably close to 11,000 or more global employees. Our business is primarily data infrastructure and storage management, both on-prem and in the cloud. We sell a storage operating system called ONTAP. We sell hardware storage devices, and we are, most importantly I think at this day and age, integrating with Azure, Google Cloud Platform, and AWS in first-party hyperscaler partnerships. My team at NetApp is… I actually have three teams under me. The largest of those three teams is the technical publications team. The other two teams are globalization, responsible for localization and translation of both collateral and product, and then finally, and newest to my team, our digital content science team, which is our data science wing. I have about 50 to 53, I think, employees at this point in my organization, and all told probably about a hundred with our vendor partners.

SO: And so I think we all have a decent idea of what the technical publications team and the globalization teams do. Can you talk a little bit about the data science side? What is that team up to?

AN: Yeah, thank you for asking that question. So about two years ago, I was faced with an opportunity to hire. And maybe some of your listeners who are managers are familiar with that situation, right? I hope they are, rather than not being able to hire. I took a moment and thought a little bit more about what I needed in the future. And I thought a little bit differently about roles and responsibilities, opportunities inside NetApp and the broader content world, and decided to bring in a data scientist. And then I thought a little bit more about, well, there are other data scientists at NetApp. Why would I need one? And I thought a little bit about the typical profile of the data scientists at that time at NetApp, mostly in IT and other product teams. Those data scientists were primarily quantitative data scientists coming from computer science backgrounds. And I thought, well, you know, we’re in the content business. I want to find a data scientist who is a content specialist, who has a background in the humanities, and who also has core data science skills, emphasizing, for example, NLP. And so that was my quest. And I was very, very fortunate to find a PhD candidate in English who wanted to get out of the academy and who had these skills. And it’s been an incredible boon to our organization. We’ve even hired a second PhD in English recently. And Sarah, since you and I are friends, I’ll say one was from UNC and one was from Duke. Okay. So we don’t have to have that discussion here. I’m an equal opportunity person. Although I did hire the UNC one first, Sarah.

SO: I see, I see. So for those of you that don’t live in North Carolina, this is… I’m not sure there is a comparison, but it is important to have both on your team. And I appreciate your inclusion of everybody. It is kind of like… I’ve got nothing.

AN: Yes.

SO: Okay, so you hired some data scientists from a couple of good universities. And do they get along? Do they talk to each other?

AN: Fabulously, yes. No petty grievances.

SO: Okay, just checking. All right. So, in this context then, what does your environment look like? What kinds of things are you doing with the docs team? And what’s the news from NetApp docs?

AN: So maybe a little bit of background actually, and you and I have talked about this previously, but we used to be a DITA shop. And then as things sped up inside our business with the adoption and development of cloud services at NetApp, we found that some of the apparatus of our DITA infrastructure, our past practices, weren’t able to keep up with the speed of the cloud services that were being developed. I think this is actually, I’ve talked to other people in our business, a very common situation. We handled it in one way. There are many ways to handle it, but the way we chose to handle it was to exit DITA and to move, in our source format anyway, to a format called AsciiDoc, which I frequently describe as a dialect of Markdown. And we went from being a closed system of technical writers working inside a closed CMS to adopting open source. We now work in GitHub. Our pipeline is all open source, and we now have contributors to our content that are not technical writers. In some cases, they’re technical marketing engineers, solution architects, and so forth, as well as a pipeline of docs that we build through automations where we, for example, transform API specifications or reference docs that are maintained by developers and output those into our own website, docs.netapp.com. In addition to just the docs part, my globalization team has been using machine translation for many years. So speaking to one particular opportunity of being in one organization, when we output our docs and whenever we update our docs in English, they’re automagically updated in eight other languages and published to docs.netapp.com. So we roughly maintain 150,000 English files, and you can times those by eight. Is that right? Did I do the math right? Yeah.

SO: Or nine, depending.

AN: Nine. Yeah. Is English the language? Yeah, sure. Let’s count it.

SO: Depends on how we use it. Okay, so you have AsciiDoc, you know, Markdown-ish. Is it fair to call it a docs-as-code environment?

AN: So we often describe it as a content ops environment. I’m not sure if that is different from docs as code, but I think maybe I will accept that as a reasonable description in the sense that we have asked our team members to think about the content that they’re writing as highly structured, semantically meaningful units of information, in the same way I think a developer can be asked to think of their code being that way. And the systems in which we write, VS Code for example, are the same ones many engineers are writing in.

SO: Mm-hmm.

AN: And of course our source files, as I mentioned, and all of our automation and our pipelines are based on being in GitHub.
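
Adam doesn’t walk through the mechanics of that automation on the show, but as a rough illustration of the kind of pipeline he’s describing, here is a minimal Python sketch that turns an OpenAPI specification into per-endpoint AsciiDoc pages for a docs-as-code repository. The file names, field choices, and output layout are assumptions for illustration, not NetApp’s actual tooling.

    # Sketch only: transform an OpenAPI spec into AsciiDoc reference pages.
    # "openapi.yaml" and the "reference/" output folder are hypothetical names.
    from pathlib import Path
    import yaml  # pip install pyyaml

    spec = yaml.safe_load(Path("openapi.yaml").read_text())
    out_dir = Path("reference")
    out_dir.mkdir(exist_ok=True)

    for api_path, methods in spec.get("paths", {}).items():
        lines = [f"= {api_path}", ""]
        for method, op in methods.items():
            if not isinstance(op, dict):  # skip shared "parameters" entries
                continue
            lines.append(f"== {method.upper()}")
            lines.append(op.get("summary", "No summary provided."))
            lines.append("")
        # One AsciiDoc page per API path, committed to the docs repo and
        # built and published by the normal docs pipeline.
        slug = api_path.strip("/").replace("/", "-") or "root"
        (out_dir / f"{slug}.adoc").write_text("\n".join(lines))

In a setup like the one described here, a script along these lines would presumably run in CI (GitHub Actions or similar) whenever developers change the spec, so the generated reference pages stay in step with the code.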

SO: And so then you’ve got docs.netapp.com as a portal or a platform where a lot of this content goes. And what’s happening over there? Do you have any news on new things you’ve done there?

AN: Yeah. I mean, very recently, you know, the timing of this is really interesting. We have been working on a generative AI solution for a year, Sarah. You’ll recall the hype, right? When ChatGPT exploded into the public consciousness, right? Through the media. And shortly thereafter, we began imagining what it might look like to leverage that technology, those types of technologies, to deliver a different customer experience. And we identified a chatbot as being something we thought could add to the browse and search experiences on docs.netapp.com. And we just released that on the 20th of August and announced it internally inside of NetApp on the 27th. So we are literally like 48, 72 hours into a public adventure here.

SO: I take full credit for planning it, even though I knew nothing about any of this.

AN: Yeah. And that was a long time, I think it’s worth noting too. It was a long time. And I think it’s beyond the full dimensions of this discussion to talk about why it took so long. But I will say, maybe, you know, we were early adopters and we felt the pain and the benefit of being that. You know, it was like changing the tires on a race car, right, that was speeding around the track. So we had to learn and be responsive and also humble, in the sense that there were some missteps that we had to recover from and some magical thinking, I think, at the beginning of the project that was qualified more over the course of the project.

SO: And so what does that GenAI solution sitting in or over the top of the docs content set, what does that do in terms of your authoring process? Do you have any, are there any changes on the backend as you’re creating this content that is then consumed by the AI?

AN: I would say we’re in the process of understanding the full implications of having this new output surface, this generative AI assistant, and fully grappling with what the implications are for the writers. We find ourselves frequently in discussions about audience. And audience is all those humans that we have been writing for and a whole bunch of machines that we now need to think more consciously about. We find ourselves often talking about standards and style, but not just from the perspective of, you know, writing the docs in a consistently patterned way for humans to be able to consume well, but also because patterns and machines are a marriage made in heaven. And we actually see opportunities to begin to think of the content we’re writing as a data set that needs to be more highly patterned and predictable so that a machine can consume it and algorithmically and probabilistically decide how to generate content from the content we’re creating.
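
The episode doesn’t describe how the docs assistant is actually built, but the idea of a machine consuming predictably patterned content can be made concrete with a small retrieval sketch: split the docs into heading-delimited chunks, index them, and pull back the most relevant chunks for a question before any generation step. The chunking rule, the TF-IDF index, and the folder layout below are illustrative assumptions, not how NetApp built theirs.

    # Sketch only: retrieve the doc chunks most relevant to a user question.
    # TF-IDF keeps the example self-contained; a production assistant would
    # typically use embeddings and pass the retrieved chunks to an LLM.
    from pathlib import Path
    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Split each AsciiDoc file on section headings ("== ...") so every chunk
    # is one semantically meaningful unit; this is where consistently
    # patterned source content pays off. The "docs" folder is hypothetical.
    chunks = []
    for f in Path("docs").glob("**/*.adoc"):
        for chunk in re.split(r"\n(?=== )", f.read_text()):
            if chunk.strip():
                chunks.append(chunk.strip())

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(chunks)

    def top_chunks(question, k=3):
        """Return the k chunks most similar to the question."""
        scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
        return [chunks[i] for i in scores.argsort()[::-1][:k]]

    # These chunks would then be handed to a generative model as context.
    print(top_chunks("How do I cable a new storage shelf?"))

The only information the retriever has to go on is the structure and terminology of the source, which is one way to read the point Adam makes here about patterning.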

SO: And where is this going in terms of what’s next as you’re looking at this? I think you mentioned that there’s other opportunities potentially to add more data slash content.

AN: Yeah, actually, if I back up to a detail I shared, but maybe quickly, you know, we do have writers in our authoring environment who are not writers by nature or by bias, sort of; they’re subject matter experts, right? And they’re in our system and they’re generating content. But I think that some of the opportunities, so that was about join us in our environment, right? Join us in our environment, reap the benefits of multi-language output, reap the benefits of fast updates, reap the benefits of being able to deliver a web-like experience as opposed to a PDF. But what I think we’ve found now is that this is a data project. This generative AI assistant has changed my thinking about what my team does. And I think, yes, on one level, true. Yes, we have a team of writers and there’s a big factory devoted to producing the docs. But in another way, you can look at it and say, well, we’re a data engine. We own and maintain a large data set, and the GenAI is one consumer of that data set. But we’re also thinking about our data set as being joinable to other data sets inside of NetApp. And in particular, I work inside the chief design office at NetApp, along with UX researchers and designers. And we’re also more broadly part of our platform team at NetApp, a shared platform team. So we’re thinking about how might we join our data with other teams’ data to create in-product experiences that are data-led or data-driven in combination with curated experiences. So if your viewers were able to see me, I am waving my hand a little bit, not because I’m dissembling, but more because I’m aspiring. And I think there’s a really, really cool future ahead, in a way, Sarah, that I think is super energizing for the writers, right? To see that their work is being reframed, not replaced or changed, right? The fear of writers with GenAI, right, is of being replaced. Well, I would offer this as an example of, you know, maybe it’s not such a dismal view, and maybe in fact there’s a very interesting future if you reframe your thinking about what you do and the opportunities to join what you do to create different experiences.

SO: And I think it’s an interesting perspective to look at GenAI as being a consumer of the content slash data that you’re putting out. A lot of the initial stuff was, this is great. GenAI will just replace all the tech writers. You’re talking about something entirely different.

AN: I guess I want to expand on that because I think we’re actually now hovering on a really important point. You know, what is your mindset? How are you thinking about this moment in time? The broad we, right, or the broader you and us generally, who are in this industry. And, you know, I think we don’t see a great indication that GenAI can create net new content and do it well, honestly. I think it can do summarizing, it can make your day-to-day, your meeting notes and so forth, Microsoft Copilot, right? There are some great uses, but I have not seen convincing, compelling indicators that docs can be written by it, at least at the enterprise level, right? Our products are complex. We often talk about our writers as sense makers, right? And I think that we can take advantage of GenAI in the right ways. And I think this is one of the ways that we’re taking advantage of it, which is to give customers another experience. And frankly, also for us to learn a lot about what people are asking and assuming, so we can learn a lot and continuously improve.

SO: So what’s happening on the delivery side? Somebody asks for some sort of information and either it says it doesn’t exist or it gives an incorrect response. Are you seeing any patterns there? What are you doing with that?

AN: Yeah, many of your listeners might have produced products themselves, right, or delivered products themselves, and remember what happens in the first day or two of releasing a product, right? So the timing of this chat is really good. Yeah, in the last couple of days we’ve seen, I was just talking to a data scientist on my team and I was saying, you know, what I think I see here emerging as a possible pattern is that people don’t actually know how to use these things effectively. That, you know, they ask of it questions that it really could never answer, or they don’t fully understand the constraints of the system, meaning that, well, it’s only based on a certain data set. You know, they don’t know that the data set doesn’t include the data they’re looking for, right? Because it sits somewhere else. You know, we’re modifying our processes to intake feedback. I think there’s a really interesting nexus: is it the AI or is it the content? That’s the really interesting one, right? You know, was the content ambiguous, deficient, duplicitous, whatever, you know, is that a word?

SO: It is now.

AN: At UNC we use that word, not at Duke. But it is an interesting discussion inside our organization when we receive a piece of feedback: what’s causing it? Is it the interpretive engine or is it our source? And so we’re seeing, it’s exposing, a lot of gaps in our content or other suboptimal implementations.

SO: I mean, we’ve said that in a sort of glib manner, because of course you’re living this day to day and hour by hour, but we’ve said that, you know, GenAI sitting over the top of a content set is going to uncover all your inconsistencies, all your missing pieces, all your, you know, over here you said update and over here you said upgrade. That was an example I heard from someone else. And so it basically uncovers your technical debt.

AN: Yeah, beautiful. Yeah, bingo. You’re so right there. Terminology, right? Oh my God. Can you believe how many things, how many ways we’ve talked about X, right?

SO: Right, and the GenAI thinks they’re different because, well, it doesn’t think anything, right, but the pattern isn’t there and so it doesn’t necessarily associate those things.

AN: Yeah, your listeners may commiserate with this, or the use of words as verbs and nouns, like cable. We often in our documentation talk about cabling devices. How would a GenAI know that the writer of the question is using cable as a verb or noun?
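
None of this tooling comes up on the show, but the update-versus-upgrade problem Sarah and Adam are circling is the kind of thing that can be checked mechanically before a GenAI layer ever exposes it. Here is a hypothetical sketch that flags files mixing competing terms; the term pairs and the docs folder are assumptions for illustration, not anything NetApp described.

    # Sketch only: flag files that mix competing terms, e.g. "update" vs "upgrade".
    from pathlib import Path
    import re
    from collections import Counter

    # Hypothetical variant sets; in practice a style guide or termbase would drive this.
    VARIANT_SETS = [("update", "upgrade"), ("cable", "connect")]

    for f in Path("docs").glob("**/*.adoc"):
        words = Counter(re.findall(r"[a-z]+", f.read_text().lower()))
        for a, b in VARIANT_SETS:
            if words[a] and words[b]:
                # Both variants appear in the same file; worth a human look,
                # since the mix may be intentional or may be drift.
                print(f"{f}: {a}={words[a]}, {b}={words[b]}")

A check like this obviously can’t tell a verb from a noun, which is exactly the ambiguity Adam points out, but it does surface the terminology debt that a chatbot would otherwise stumble over one question at a time.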

SO: Mm-hmm. So as you’re working through this, you know, it sounds like two days of go-live plus a year or two or three of suffering. A year and two days.

AN: Well, a year and two days, a year and two days.

SO: You know, I think you’re further along than a lot of other organizations. Do you have any advice for those that are just beginning this journey and just looking at these kinds of issues? What are the things you did best or maybe worst, or would do the same way or not? What’s out there that you can tell people that’ll maybe help them as they move forward?

AN: Yeah, maybe think of it in the old people, process, systems dimensions. Actually, taking that latter one, systems, I would say beware the fascination of the system without thinking more about the processes and people that are going to be involved in the creation of some kind of generative AI solution. I think, you know, this is as much an adaptive people-and-process problem as it is a technical problem. Probably more on the adaptive side, frankly. And from a process perspective, I’d say, be curious about what you learn. Be attentive to the specifics, but look for the broad patterns in the feedback or what you’re seeing as you develop these solutions. You know, I think I hinted at this before, and for me it has frankly been the epiphany of the project. There have been many, but I would really highlight this one, which is: what does my team do? What is the value of what they generate? And for me, yes, we are, you know, primarily a team that creates documentation, but, you know, holy smokes, the idea that we are data owners, and we govern a massive, semantically rich, non-deterministic, fast-changing data set, that is super, super interesting. Even here inside NetApp, Sarah, we have teams reaching out to us who frankly probably never thought about the docs before. And all of a sudden, because we have this huge data set, they’re like, wow, we can, you know, stress test our systems or our new technologies using what they have. That’s a super cool moment for our team.

SO: Yeah, I think you’re the first person that I’ve heard describe this sort of context shift from “this is content” to “this is data,” or “this content is also data,” or however you want to phrase that. But I think that’s a really interesting point and opens up a lot of fascinating possibilities, not least for the English PhDs of the world. That’s super helpful.

AN: Is this where I confess that at one time I thought I was going to be one of those, and I got out because I realized I was terrible at it?

SO: No, no, no, that goes in the non-recorded part of the podcast. Yeah, I’m going to wrap it up there before Adam spills all of the dirt.

AN: Yeah, what am I compensating for, right?

SO: But thank you, because this is really, really interesting. And I think it will be helpful to the people listening to this podcast, because it’s so rare to get that inside view of what it really looks like and what’s really going on inside some of these bigger organizations as you move towards AI, GenAI strategies and figure out how best to leverage that. So thank you, Adam. And it’s great to see you.

AN: No, Sarah, thank you. And actually, I would like to thank my team. I mean, it has been an incredible adventure, and I think the team is really amazing.

SO: Yeah, and I know a few of them and they are great. So with that, thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Enterprise content operations in action at NetApp (podcast) appeared first on Scriptorium.
