Even though he lived 2,000 years ago, you’ve probably heard of the Chinese military strategist and general Sun Tzu. He’s known for a lot of things, but these days he’s best known for his work The Art of War[1], which captures military wisdom that is still studied and applied today.
Although Sun Tzu didn’t write about building a data culture[2], there’s still a lot we can learn from his writings. Perhaps the most relevant advice is this:
Building a data culture is hard. Keeping it going, and thriving, as the world and the organization change around you is harder. Perhaps the single most important thing[3] you can do to ensure long-term success is to define the strategic goals for your efforts.
Rather than diving straight into all the other important and valuable tactical things, pause and think about why you’re doing them, and where you want to be once they’re done. This strategic reflection will prove invaluable, as it will help you prioritize, scope, and tune those tactical efforts.
Having a shared strategic vision makes everything else easier. At every step of the journey, any contributor can evaluate their actions against that strategic vision. When conflicts arise – as they inevitably will – your pre-defined strategic north star can help resolve them and keep your efforts on track.
[3] I say “perhaps” because having an engaged executive sponsor is the other side of the strategy coin. Your executive sponsor will play a major role in defining your strategy, and in getting all necessary stakeholders on board with the strategy. Although I didn’t plan it this way, I’m quite pleased with the parallelism of having executive sponsorship be the first non-introductory video in the series, and having this one be the last non-summary video. It feels neat, and right, and satisfying.
This morning I presented a new webinar for the Istanbul Power BI user group, covering one of my favorite subjects: common patterns for successfully using and adopting dataflows in Power BI.
This session represents an intersection of my data culture series in that it presents lessons learned from successful enterprise customers, and my dataflows series in that… in that it’s about dataflows. I probably didn’t need to point out that part.
The session recording is available for on-demand viewing. The presentation is around 50 minutes, with about 30 minutes of dataflows-centric Q&A at the end. Please check it out, and share it with your friends!
In an ideal world, everyone knows where to find the resources and tools they need to be successful.
We don’t live in that world.
I’m not even sure we can see that world from here. But if we could see it, we’d be seeing it through a portal[1].
One of the most common themes from my conversations with enterprise Power BI customers is that organizations that are successfully building and growing their data cultures have implemented portals where they share the resources, tools, and information that their users need. These mature companies also treat their portal as a true priority – the portal is a key part of their strategy, not an afterthought.
This is why:
In every organization of non-trivial size there are obstacles that keep people from finding and using the resources, information, and data they need.
Much of the time people don’t know what they need, nor do they know what’s available. They don’t know what questions to ask[2], much less know where to go to get the answers. This isn’t their fault – it’s a natural consequence of working in a complex environment that changes over time on many different dimensions.
As I try to do in these accompanying-the-video blog posts, I will let the video speak for itself, but there are a few key points I want to emphasize here as well.
You need a place where people can go for all of the resources created and curated by your center of excellence
You need to engage with your community of practice to ensure that you’re providing the resources they need, and not just the resources you think they need
You need to keep directing users to the portal, again and again and again, until it becomes a habit and they start referring their peers to it
The last point is worth emphasizing and explaining. If community members don’t use the portal, it won’t do what you need it to do, and you won’t get the return you need on your investments.
Users will continue to use traditional “known good” channels to get information – such as sending you an email or IM – if you let them. You need to not let them.
[1] See what I did there?
[2] Even though they will often argue vehemently against this fact.
One aspect of building a data culture is selecting the right tools for the job. If you want more people working with more data, giving them the tools they need to do that work is an obvious[1] requirement. But how many tools do you need, and which tools are the right tools?
It should be equally obvious that the answer is “it depends.” This is the answer to practically every interesting question. The right tools for an organization depend on the data sources it uses, the people who work with that data, the history that has gotten the organization to the current decision point, and the goals the organization needs to achieve or enable with the tools it selects.
With that said, it’s increasingly common[2] to see large organizations actively working to reduce the number of BI tools they support[3]. The reasons for this move to standardization are often the same:
Reduce licensing costs
Reduce support costs
Reduce training costs
Reduce friction involved in driving the behaviors needed to build and grow a data culture
Other than reducing the licensing costs[4], most of these motivations revolve around simplification. Having fewer tools means learning and using fewer tools. It means everyone learning and using fewer tools, which often results in less time and money spent to get more value from the use of those tools.
One of the challenges in eliminating a BI tool is ensuring that the purpose that tool fulfilled is now effectively fulfilled by the tool that replaces it. This is where migration comes in.
The migration documentation for this scenario was written by the inestimable Melissa Coates of Coates Data Strategies, with input and technical review by the Power BI customer advisory team. If you’re preparing to retire another BI tool and move its workload to Power BI – or if you’re wondering where to start – I can’t recommend it highly enough.
[1] If this isn’t obvious to a given organization or individual, I’m reasonably confident that they’re not actively trying to build a data culture, and not reading this blog.
[2] I’m not a market analyst but I do get to talk to BI, data, and analytics leaders at large companies around the world, and I suspect that my sample size is large and diverse enough to be meaningful.
[3] I’m using the word “support” here – and not “use” – deliberately. It’s also quite common to see companies remove internal IT support from deprecated BI tools while letting individual business units continue to use them – and pay for the tools and support out of their own budgets. This is typically a way to allow reluctant “laggard” internal customer groups to align with the strategic direction, but on their own schedules.
[4] I’m pretty consistent in saying I don’t know anything about licensing, but even I understand that paying for two things costs more than paying for one of those things.
In addition to collaboration and partnership between business and IT, successful data cultures have something else in common: they recognize the need for both discipline and flexibility, and have clear, consistent criteria and responsibilities that let all stakeholders know what controls apply to what data and applications.
Today’s video looks at this key fact, and emphasizes this important point: you need to pick your battles[1].
If you try to lock everything down and manage all data and applications rigorously, business users who need more agility will not be able to do their jobs – or more likely they will simply work around your controls. This approach puts you back into the bad old days before there were robust and flexible self-service BI tools – you don’t want this.
If you try to let every user do whatever they want with any data, you’ll quickly find yourself in the “wild west” days – you don’t want that either.
Instead, work with your executive sponsor and key stakeholders from business and IT to understand what requires discipline and control, and what supports flexibility and agility.
One approach will never work for all data – don’t try to make it fit.
[1] The original title of this post and video was “discipline and flexibility” but when the phrase “pick your battles” came out unscripted[2] as I was recording the video, I realized that no other title would be so on-brand for me. And here we are.
[2] In case you were wondering, it’s all unscripted. Every time I edit and watch a recording, I’m surprised. True story.
My recent post on metadata and Indian food has gotten more traffic than most posts on this blog, and it has received more comments and discussion as well. One comment in particular, from Jessica Jolly, really resonated with me:
Love the analogy. My personal favorite analogy regards old family photos. If no one takes the time to write on the back of the photo the who/what/where/when/why (i.e. the metadata), that photo will get thrown away.
Not everyone agreed that this was a great analogy. Khürt Williams in particular called out the inherent value of some data independent of any metadata to give it context.
No one throws away old family photos because they lack who/what/where/when/why. In fact, I would argue that with family photos the metadata lives in the minds of the people in the photograph or some family member you haven’t yet spoken to.
Somethings have value way beyond their metadata.
These comments got me thinking, and made me ask myself: when does memory die?
I’ve seen many variations on this quote[1], but I don’t know who said it first:
You only live as long as the last person who remembers you.
This may be a Russian proverb or it may be a quote from Westworld, but I believe the principle applies as much to business data as it does to family photos, despite the obvious differences between the two.
Looking at the family photo context first, I can clearly recall times in my life, in those dark days following a funeral or a divorce, when family photos were discarded and the lack of metadata was a contributing factor. The photos of close relatives were kept, but those of more distant relatives were at risk.
When you’re asking “should I keep this photo?” and the next question is “who are these people?” the answer to the second question is going to influence the answer to the first.
As a specific example, I’d like to share a photo that hangs above the one-handed swords[2] in my hallway.
I don’t know who this is.
This photo was in the home of my wife’s grandmother, who passed away almost 20 years ago. We found it when we were cleaning out her house after her funeral; it was in the attic, not on display, and no one knew who this young man might be. A few relatives thought that he was a cousin or second cousin of my wife’s late grandmother who went to the Great War and never returned – but no one was certain. There was writing on the paper backing the frame, but it was faded and smudged by the years, and by the time we discovered the photo the words were illegible.
By the time we discovered the data, the metadata was no longer usable, and any subject matter expert who could have shared the deeper context of the data had long since moved on.
And once you phrase it like that, it starts to sound familiar again.
In far too many business contexts the metadata lives only in the minds of the people who create and work with the data. It’s tribal knowledge – just like unlabeled family photographs. But as people move on to new jobs and the business changes over time, that tribal knowledge is lost. Even though the data may still be the same, and may still be valuable, when the people move on the tribal knowledge leaves with them. At this point it will either be organically rediscovered and recreated, or the data will stagnate because no one remembers anymore why it was important. Or, as is the case with the photo above, the data may be used and applied to a different purpose.
Tribal knowledge is a lousy metadata solution, no matter the context. Because tribal knowledge is inherently transitory and lossy, we should strive to capture metadata in a more systematic way, and to keep the metadata as close to the data as possible.
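To make “close to the data” a little more concrete, here’s a minimal sketch in Python – with entirely hypothetical names and descriptions – of the difference between context that travels with the data and context that lives only in someone’s head:

```python
# A minimal sketch, not tied to any particular tool.
# Every name and description below is hypothetical.

monthly_revenue = {
    "name": "monthly_revenue",
    "description": "Recognized revenue by month, sourced from the general ledger.",
    "owner": "finance-analytics@example.com",
    "columns": {
        "fiscal_month": "First day of the fiscal month (not the calendar month).",
        "revenue_usd": "Recognized revenue in USD, net of returns and credits.",
        "region_code": "Sales region; see the regions reference table.",
    },
}

# The alternative is tribal knowledge: the same facts living only in the
# heads of the people who built the table, and leaving with them.
```

The point isn’t the format – it’s that the descriptions are stored with the definition, where the next person will actually find them.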
Because eventually memory will die. And some things are too important to forget.
[1] My favorite variation may be from Manowar, who remind us that only courage and heroism linger after death… but it would be a stretch even for me to incorporate this into the body of the post. This is why we have footnotes.
[2] I call out the fact that these are the one-handed swords because the two-handed swords hang in a different hallway, and there isn’t enough room for a photo above them.
I love to cook, and over the past few days I’ve made a few of my favorite Indian recipes that I haven’t made in a while. So, of course, this has me thinking about metadata.
Homemade naan and chicken tikka masala
Going from Indian cooking to metadata isn’t as big a leap as you might think. The bridge is one of my favorite cookbooks: Julie Sahni’s Classic Indian Vegetarian and Grain Cooking.
If you’re just here for the food, you should immediately make this spectacularly delicious Bengal red lentil recipe taken from this book, because it is absolutely phenomenal. If you’re here for the metadata, remember the link but don’t click on it yet.
Every recipe I’ve made from this cookbook has produced fantastic results. It’s one of those go-to cookbooks where I know that anything I try will be good. And yet, I almost never seek it out when I want to cook, except for the recipes I already know. The reason is metadata.
It doesn’t matter how good your data is – without effective and available metadata, your investment in quality data will be undermined.
Let’s look at the recipe for saag paneer. Say those words out loud (“saag paneer”) and images of that rich, vibrant green sauce will start running through your mind.
I found this recipe easily because I have a bookmark. But let’s say I didn’t – it should still be easy to find, because cookbooks have indexes, and indexes are the perfect tool for finding recipes. Let’s find the recipe for saag paneer.
Oh, there’s no entry for saag paneer, or for saag?
There’s no entry for paneer? There are so many paneer recipes in this book!
Ok, we’re making progress… I think.
There it is. Maybe? It still doesn’t say saag paneer anywhere.
Literally the only place the phrase “saag paneer” exists in this book is below the recipe header. This means that the only way to find the saag paneer recipe is to flip through the book page by page, or to know the specific and arbitrary phrase the author uses to describe the recipe for Western readers. This is why my copy of the book looks like this[1]:
This systemic problem is exacerbated by the book’s complete lack of photos; there’s also no way to skim through the book and quickly identify recipes of interest visually. The reader is forced to carefully evaluate each recipe in turn, looking at ingredients and processes to decide if the recipe is worth making.
At this point you may be asking what this has to do with metadata[2] or you may see the connection already.
The reason I immediately thought of metadata may be related to a BI effort I’m working on. Without going into too much detail, I have built a small Power BI app that presents information from a program I run and makes that information available to other members of my extended team.
I’m currently at the point where my app needs to include data from other sources in order to increase its value. Fortunately, that data already exists, and to make it even easier to work with, it is available as a set of Power BI dataflows. I was able to email the owner to get access[3] and to learn which dataflows to look in, and I was off. But not for very far, or for very long.
Very quickly I was back where this post started: I was faced with the high-quality data I needed, and I lacked the metadata to efficiently use it. I needed to manually evaluate each dataflow and each entity to understand its contents and context and to decide if it was right for me. I made some early progress, but because of the lack of metadata the effort will likely take days, not hours, and this means it probably won’t get done this month or next.
Let that sink in: because of a lack of effective metadata, quality curated data is going unused, and business insights are being delayed by weeks or months[4].
Just like these fantastic recipes sitting on my shelf, largely unused and unmade because a fantastic cookbook lacks a usable index, these fantastic dataflows are going largely unused, at least by me. All because metadata was treated as a “nice to have” rather than as a fundamental high-priority requirement.
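For what it’s worth, the manual evaluation I describe above could be at least partially automated. Here’s a rough sketch in Python of the kind of metadata audit I ended up doing by hand – it assumes a dataflow definition exported as a model.json file (the CDM folder format dataflows use), and the file name and structure shown here are simplified assumptions rather than a definitive recipe:

```python
import json

# Rough sketch: given a dataflow definition (model.json), report which
# entities are missing descriptions. Assumes the file has already been
# exported or downloaded; "model.json" is a placeholder path.

with open("model.json", encoding="utf-8") as f:
    model = json.load(f)

for entity in model.get("entities", []):
    name = entity.get("name", "<unnamed entity>")
    description = (entity.get("description") or "").strip()
    marker = "OK     " if description else "MISSING"
    print(f"[{marker}] {name}: {description or 'no description provided'}")
```

Even a crude report like this tells you where the metadata gaps are – which is the first step in closing them.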
Does your data have the metadata it needs, in a format and location that serves the needs of your users? How do you know? Remember that last picture of all the bookmarks[5]?
These bookmarks are a symptom of the underlying metadata problem. Bookmarks aren’t a problem themselves, but if you’re paying attention you can see that they’ve been implemented as a workaround to a problem that might not otherwise be apparent. If you’re familiar with the concept of “code smells”, you probably see where I’m going.
When your data lacks useful metadata to enable its effective use, people will start to take actions because of this lack. Things like emailing you to ask questions. Things like building their own ad hoc data dictionaries. Things like using alternate or derivative sources instead of using your authoritative data source – like the recipe link I shared above.
The more of these actions you identify, the more urgency you should feel about closing the metadata gap. Not every data source is a werewolf, but every data source requires metadata to be effectively and efficiently used.
Requires.
[1] Remember this picture. There will be a quiz later.
[2] You may also be asking if there’s anything in life that doesn’t make me think about metadata. This is a fair question.
[3] I knew the owner’s email because I had bookmarked it earlier.
[4] To be fair, my full schedule is also contributing to this delay – I’m not trying to say that the lack of metadata is independently costing months. But it is a key factor: my schedule could accommodate two or three hours for this work, but it doesn’t have room for two or three days until the end of April.
The last post was about the dangers inherent in measuring the wrong thing – choosing a metric that doesn’t truly represent the business outcome[1] you think it does. This post is about different problems – the problems that come up when you don’t truly know the ins and outs of the data itself… but you think you do.
This is another “inspired by Twitter” post – it is specifically inspired by this tweet (and corresponding blog post) from Caitlin Hudon[2]. It’s worth reading her blog post before continuing with this one – you go do that now, and I’ll wait.
The scariest ghost stories I know take place when the history of data — how it’s collected, how it’s used, and what it’s meant to represent — becomes an oral one, passed down like campfire stories from one generation of analysts to another. 👻https://t.co/nTQNSmk3oD
Caitlin’s ghost story reminded me of a scary story of my own, back from the days before I specialized in data and BI. Back in the days when I was a werewolf hunter. True story.
Around 15 years ago I was a consultant, working on a project with a company that made point-of-sale hardware and software for the food service industry. I was helping them build a hosted solution for above-store reporting, so customers who had 20 Burger Hut or 100 McTaco restaurants[3] could get insights and analytics from all of them, all in one place. This sounds pretty simple in 2020, but in 2005 it was an exciting first-to-market offering, and a lot of the underlying platform technologies that we can take for granted today simply didn’t exist. In the end, we built a data movement service that took files produced by the in-store back-of-house system and uploaded them over a shared dial-up connection[4] from each restaurant to the data center where they could get processed and warehoused.
The analytics system supported a range of different POS systems, each of which produced files in different formats. This was a fun technical challenge for the team, but it was a challenge we expected. What we didn’t expect was the undocumented failure behavior of one of these systems. Without going into too much detail, this POS system would occasionally produce output files that were incomplete, but which did not indicate failure or violate any documented success criteria.
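To make the failure mode concrete, here’s a hedged sketch – invented names and thresholds, not the code we actually wrote – of the kind of completeness check this situation calls for: don’t trust a file just because it parses and meets the documented success criteria; sanity-check whether it’s plausibly complete.

```python
# Illustration only: the checks and thresholds below are invented.
# The point is to validate completeness, not just format.

def looks_complete(records, expected_min_records, prior_day_total):
    """Flag files that parse cleanly but are suspiciously incomplete."""
    if len(records) < expected_min_records:
        return False  # far fewer transactions than this store ever produces

    daily_total = sum(r["amount"] for r in records)
    # A day under 10% of the previous day's sales is more likely a
    # truncated upload than a real business result.
    if prior_day_total and daily_total < 0.1 * prior_day_total:
        return False

    return True
```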
To make a long story short[5], because we learned about the complexities of this system very late in the game, we had some very unhappy customers and some very long nights. During a retrospective we engaged with one of the project sponsors for the analytics solution because he had – years earlier – worked with the development group that built this POS system. (For the purposes of this story I will call the system “Steve” because I need a proper noun for his quote.)
The project sponsor reviewed all we’d done from a reliability perspective – all the validation, all the error handling, all the logging. He looked at this, then he looked at the project team and he said:
You guys planned for wolves. ‘Steve’ is werewolves.
Even after all these years, I still remember the deadpan delivery of this line. And it was so true.
We’d gone in thinking we were prepared for all of the usual problems – and we were. But we weren’t prepared for the horrifying reality of the data problems that were lying in wait. We weren’t prepared for werewolves.
Digging through my email from those days, I found a document I’d sent to this project sponsor, planning for some follow-up efforts, and was reminded that for the rest of the projects I did for this client, “werewolves” became part of the team vocabulary.
What’s the moral of this story? Back in 2008 I thought the moral was to test early and often. Although this is still true, I now believe that what Past Matthew really needed was a data catalog or data dictionary with information that clearly said DANGER: WEREWOLVES in big red letters.
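What might that look like in practice? As a sketch – with hypothetical field names, since the source system and its details are long gone – even a minimal catalog entry with a prominent known-issues field would have changed how we planned:

```python
# A minimal, hypothetical data dictionary entry. The structure is an
# illustration, not a specific catalog product or schema.

pos_sales_extract = {
    "source": "Steve (in-store POS back-of-house export)",
    "owner": "pos-platform-team@example.com",
    "refresh": "Nightly upload from each restaurant",
    "known_issues": [
        "DANGER: WEREWOLVES",
        "Occasionally produces incomplete files that still pass the "
        "documented success criteria; validate completeness downstream.",
    ],
}
```

The format matters far less than the fact that the warning is written down somewhere the next team will actually find it.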
This line from Caitlin’s blog post could not be more wise, or more true:
The best defense I’ve found against relying on an oral history is creating a written one.
The thing that ended up saving us back in 2005 was knowing someone who knew something – we happened to have a project stakeholder who had insider knowledge about a key data source and its undocumented behavior. What could have been better? Some actual <<expletive>> documentation.
Even in 2020, and even in mature enterprise organizations, having a reliable data catalog or data dictionary that is available to the people who could get value from it is still the exception, not the rule. Business-critical data sources and processes rely on tribal knowledge, time after time and team after team.
I won’t try to supplement or repeat the best practices in Caitlin’s post – they’re all important and they’re all good and I could not agree more with her guidance. (If you skipped reading her post earlier, this is the perfect time for you to go read it.) I will, however, supplement her wisdom with one of my favorite posts from the Informatica blog, from back in 2017.
I’m sharing this second link because some people will read Caitlin’s story and dismiss it because she talks about using Google Sheets. Some people will say “that’s not an enterprise data catalog.” Don’t be those people.
Regardless of the tools you’re using, and regardless of the scope of the data you’re documenting, some things remain universally true:
Tribal knowledge can’t be relied upon at any meaningful scale or across any meaningful timeline
Not all data is created equal – catalog and document the important things first, and don’t try to boil the ocean
The catalog needs to be known by and accessible to the people who need to use the data it describes
Someone needs to own the catalog and keep it current – if its content is outdated or inaccurate, people won’t trust it, and if they don’t trust it they won’t use it
Sooner or later you’ll run into werewolves of your own, and unless you’re prepared in advance the werewolves will eat you
When I started to share this story I figured I would find a place to fit in a “unless you’re careful, your data will turn into a house when the moon is full” joke without forcing it too much, but sadly this was not the case. Still – who doesn’t love a good data werehouse joke?[6]
Maybe next time…
[1] Or whatever it is you’re tracking. You do you.
[2] Apparently I started this post last Halloween. Have I mentioned that the past months have been busy?
[3] Or Pizza Bell… you get the idea.
[4] Each restaurant typically had a single “data” phone line that used the same modem for processing credit card transactions. I swear I’m not making this up.
[5] Or at least short-ish. Brevity is not my forte.
I live 2.6 miles (4.2 km) from the epicenter of the coronavirus outbreak in Washington state. You know, the nursing home that’s been in the news, where over 10 people have died, and dozens more are infected.[1]
As you can imagine, this has started me thinking about self-service BI.
Where can I find information I can trust?[2]
When the news started to come out covering the US outbreak, there was something I immediately noticed: authoritative information was very difficult to find. Here’s a quote from that last link.
This escalation “raises our level of concern about the immediate threat of COVID-19 for certain communities,” Dr. Nancy Messonnier, director of the CDC’s National Center for Immunization and Respiratory Diseases, said in the briefing. Still, the risk to the general public not in these areas is considered to be low, she said.
That’s great, but what about the general public in these areas?
What about me and my family?
When most of what I saw on Twitter was people making jokes about Jira tickets[3], I was trying to figure out what was going on, and what I needed to do. What actions should I take to stay safe? What actions were unnecessary or unhelpful?
Before I could answer these questions, I needed to find sources of information. This was surprisingly difficult.
Specifically, I needed to find sources of information that I could trust. There was already a surge in misinformation, some of it presumably well-intentioned, and some from deliberately malicious actors. I needed to explore, validate, confirm, cross-check, act, and repeat. And I was doing this while everyone around me seemed to be treating the emerging pandemic as a joke or a curiosity.
I did this work and made my decisions because I was a highly-motivated stakeholder, while others in otherwise similar positions were farther away from the problem, and were naturally less motivated at the time.[4]
And this is what got me thinking about self-service BI.
In many organizations, self-service BI tools like Power BI will spread virally. A highly-motivated business user will find a tool, find some data, explore, iterate, refine, and repeat. They will work with untrusted – and sometimes untrustworthy – data sources to find the information they need to use, and to make the decisions they need to make. And they do it before people in similar positions are motivated enough to act.
But before long, scraping together whatever data is available isn’t enough anymore. As the number of users relying on the insights being produced increases – even if the insights are being produced by a self-service BI solution – the need for trusted data increases as well.
Where an individual might successfully use disparate unmanaged sources, a population needs a trusted source of truth.
At some point a central authority needs to step up, to make available the data that can serve as that single source of truth. This is easier said than done[5], but it must be done. And this isn’t even the hard part.
The hard part is getting everyone to stop using the unofficial and untrusted sources that they’ve been using to make decisions, and to use the trusted source instead. This is difficult because these users are invested in their current sources, and believe that they are good enough. They may not be ideal, but they work, right? They got me this far, so why should I have to stop using them just because someone says so?
This brings me back to those malicious actors mentioned earlier. Why would someone deliberately share false information about public health issues when lies could potentially cost people their lives? They would do it when the lies would help forward an agenda they value more than they value other people’s lives.
In most business situations, lives aren’t at stake, but people still have their own agendas. I’ve often seen situations where the lack of a single source of truth allows stakeholders to present their own numbers, skewed to make their efforts look more successful than they actually are. Some people don’t want to have to rebuild their reports – but some people want to use falsified numbers so they can get a promotion, or a bonus, or a raise.
Regardless of the reason for using untrusted sources, their use is damaging and should be reduced and eliminated. This is true of business data and analytics, and it is true of the current global health crisis. In both arenas, let’s all be part of the solution, not part of the problem.
[1] Before you ask, yes, my family and I are healthy and well. I’ve been working from home for over a week now, which is a nice silver lining; I have a small but comfortable home office, and can avoid the obnoxious Seattle-area commute.
[2] This article is the best single source I know of. It’s not an authoritative source for the subject, but it is aggregating and citing authoritative sources and presenting their information in a form closer to the solution domain than to the problem domain.
[3] This is why I’ve been practicing social media distancing.
[4] This is where the “personal pandemic parable” part of the blog post ends. From here on it’s all about SSBI. If you’re actually curious, I erred on the side of caution and started working from home and avoiding crowds before it was recommended or mandated. I still don’t know if all of the actions I’ve taken were necessary, but I’m glad I took them and I hope you all stay safe as well.
[5] As anyone who has ever implemented a single source of truth for any non-trivial data domain can attest.
[6] You can enjoy the lyrics even if Kreator’s awesome music isn’t to your taste.
I’m running behind on my own YouTube publishing duties[1], but that doesn’t keep me from watching[2] the occasional data culture YouTube video produced by others.
Like this one:
Ok… you may be confused. You may believe this video is not actually about data culture. This is an easy mistake to make, and you can be forgiven for making it, but the content of the video makes its true subject very clear:
A new technology is introduced that changes the way people work and live. This new technology replaces existing and established technologies; it lets people do what they used to do in a new way – easier, faster, and further. It also lets people do things they couldn’t do before, and opens up new horizons of possibility.
The technology also brings risk and challenge. Some of this is because of the new capabilities, and some is because of the collision[3] between the new way and the old way of doing things. The old way and the new way aren’t completely compatible, but they use shared resources and sometimes things go wrong.
At the root of these challenges are users moving faster than any relevant authorities. Increasing numbers of people are seeing the value of the new technology, assuming the inherent risk[4], and embracing its capabilities while hoping for the best.
Different groups see the rising costs and devise solutions for these challenges. Some solutions are tactical, some are strategic. And eventually some champions emerge to push for the creation of standard solutions. Or standards plural, because there always seems to be more than one of those darned things.
Not everyone buys into the standards at first, but over time the standards are refined and… actually standardized.
This process doesn’t slow down the technology adoption. The process and the standards instead provide the necessary shape and structure for adoption to take place as safely as possible.
With the passage of time, users take for granted the safety standards as much as they take for granted the capabilities of the technology… and can’t imagine using one without the other.
For the life of me I can’t imagine why they kept doubling down on the “lane markings” analogy, but I’m actually happy they did. This approach may get more people paying attention – I can’t find any other data culture videos on YouTube with 488K views…
[1] Part of this is because my wife has been out of town, and my increased parental responsibilities have reduced the free time I would normally spend filming and editing… but it’s mainly because I’m finding that talking coherently about data culture is harder for me than writing about data culture. I’ll get better, I assume. I hope.
[2] In this case, I watched while I was folding laundry. As one does.
[3] Yes, pun intended. No, I’m not sorry.
[4] Either through knowledge or through ignorance.