On Death and Immortality

At the gym today, I listened to a Very Bad Wizards podcast about the fear of death, and about what it would be like to live forever, and I thought I’d share some thoughts. It’s a podcast that I absolutely love, with philosopher Tamler Sommers and psychologist David Pizarro, although sometimes you’d think their educations were reversed.

I’m going to present a few hypothetical lifespan cases, and I’m curious to hear which one you would choose and why:

  • Live forever
  • Drink a magical elixir that keeps you alive every day, but if you decide not to drink it you die
  • Live 300 years, aging proportionally, so a 299-year-old would look the way a 90-year-old does now
  • Live a normal lifespan and don’t know when you’re going to die, but assume around 75-90

In all of these hypothetical cases there’s a huge dependence on how much satisfaction you derive from living. If you don’t feel deeply satisfied with your life, and don’t believe you could become that way, then there’s no reason to think about immortality, because it would be a (figurative) death sentence. This blog post isn’t meant to be about achieving satisfaction in life though, so I’ll leave that aside and talk about immortality today. So, given the above options, which would you choose?

To me, living forever has some obvious problems. First of all, you would see family members and friends die time and time again. The people you become most attached to would leave, and the only way to not live in a miserable cycle of death would be to detach yourself from those people or detach yourself from the emotions that make you feel close to them. Eventually, you’d need to find fulfillment in other walks of life outside of personal connections to family and friends, and I could see that leading to a depressing and empty existence. You also run the risk of not needing to push yourself to do, well, anything. If you have forever to do it, why bother doing it now? Part of the satisfaction we get from accomplishments involves the timeliness of them, and I could see that slipping away as time becomes a non-issue for your immortal brain. Lastly (though there are probably a ton of others), you would simply get bored. I think that we try to pack in as much as possible in our short time on this earth, and that short time span drives us to become the best we can be given this restraint. Part of what fuels that exploration, curiosity and growth is the fact that we simply can’t get everything done in time, but we should certainly try.

How about drinking an elixir to keep you alive? Also, what would it taste like? Would you go mad drinking the same flavour of elixir every day? Is it purple? I imagine it’s purple. Anyway, this option allows you to just end it when you get bored with seeing the same stuff over and over and over. Seems simple enough, and seems like a good choice. As they mention in the podcast, vampires will often retreat to their coffins for a couple hundred years just because they get bored of the scenery and want to wake up to a new world every once in a while. The hibernation option wasn’t presented here, but this choice would allow you to give up once you got what you needed from life. The same problems about a lack of time pressure exist as in the infinite case, though, and I could easily see someone devolving into a boring blob of sensation-seeking meat, with bigger things always on the ever-retreating horizon, never actually reached because there’s always time to do them later.

A 300-year life seems like it might not be so bad, if we ignore of course the economic, political, and ecological issues that would come along with the first generation of 300-year-olds. For me, it would allow time to learn more things, explore more of the world, learn to play cello (though I plan to do that in this short lifetime anyway!) and see more technology change throughout my life… sign me up! There are likely some issues I haven’t thought of yet, so maybe just put me on the wait list for now.

The last option is the normal life-span life, which I guess we’re stuck with for now. Might as well enjoy it! …And still try to learn cello.

Here’s the link to the podcast, in case you want to explore in more depth. Just a disclaimer: in the first segment they remake classic philosophy thought experiments using porn references, and it’s absolutely hilarious.

We Don’t Need To (And Can’t) Know Everything

Socrates said “The only true wisdom is in knowing you know nothing.”

There isn’t necessarily a universal truth for all problems. People have tried to find the “right” way to run a country, and we’ve come up with democracy, capitalism, communism, fascism, dictatorships, and more. Every time people thought they had arrived at the objective truth and the “proper way”, it failed the test of time. The only objectively true conclusion is that a lot of problems don’t have objectively true solutions. Often the solutions are positive or negative depending on your perspective, and this relativism is important to keep in mind. This post is inspired by a Very Bad Wizards podcast episode where they talk about Jorge Luis Borges’ short story Tlön, Uqbar, Orbis Tertius.

This does not mean that ALL issues are relative, however, as some people seem to claim. Most of science provides good examples of this: while we don’t necessarily know the truth about certain phenomena, we can say with a high level of certainty that the truth does exist, and that it’s the same no matter how you look at it. The claim that all issues are relative starts to show up when people lose faith in the systems that have failed them; they may start to lose faith in ALL systems. Some systems, however, while not necessarily objectively the “best”, can still be good and can be improved upon slowly.

We don’t need to know everything. We don’t even need to strive for that. As Socrates said (or supposedly said), wisdom is realizing that we don’t know everything. Knowledge, on the other hand, speaks to objective truths that we do “know”.

We can focus on accepting that we have limited perspective and we’ll always be influenced by biases and our point of view.

If we DON’T do this, we start to think that the things we do know objectively can apply to other things that cannot be known objectively. The example used in the podcast is a Silicon Valley tech person who’s really good at programming and then learns some psychology, and thinks that they can create happiness with their app. (Seems like this has already happened a few thousand times…)

We have to realize our limitations when it comes to knowledge and be wise about how we use and share the knowledge that we do have. If we want to openly discuss, think, or progress as a society, we need to be aware of our perspective and aware of the problem that we’re trying to solve, especially if there is no correct solution.

Appealing to Commonalities

I recently listened to an amazing podcast with Joe Rogan and Jonathan Haidt which I highly recommend checking out. One of the topics that came up was how to fight the current sociopolitical climate and how to make real change, or rather how to get back on track to diminishing racism, sexism, and other forms of judgment based on people’s appearances.

As they mention in the podcast, Martin Luther King Jr. called out to his “white brothers and sisters” several times in his famous speech. Countless other influential people who fought for the rights of the oppressed have aimed for change by pointing out the similarities between all people, and asking members of opposing groups to understand where they’re coming from without attacking those groups.

We’re not doing that anymore. There are a number of factors at play, and I won’t claim to have done the research to prove it statistically, but I think it’s clear that we’re seeing a trend away from the progress of the last 50 years. What we’re seeing now is a lot of name-calling, divisiveness, and dismissal of people’s opinions based on their race, gender, or socioeconomic status, and we’re seeing this from all sides. This is catalyzed by social media, and it has led to the tension we feel today.

We can change this and get back on track, and I sincerely hope that we will.

If we can open our ears to people with different opinions, listen to them as people and not as the “enemy”, and realize that we’re all just people, we can get back on track. We can agree that everyone, regardless of who they are, faces certain human problems. We can feel for someone from a completely different world because we’ve had the feelings that they are having at some point in our lives. We can start talking to each other logically and clearly, and more importantly we can start listening to each other without labeling or extreme bias. We can stop focusing on what colour our skin is or what gender we are, and realize that none of that is important if we want to become closer to one another, build trust, and ultimately live in a world of true equal opportunity.

The Best Book I’ve Ever Read

I just finished the best book that I’ve ever read, ever. In fact, it’s more than the best book I’ve ever read… I would argue that it’s the best collection of words I’ve ever seen in any sort of literary piece ranging from novels to essays to textbooks.

The book is called The Blank Slate: The Modern Denial of Human Nature by Steven Pinker, a cognitive scientist, linguist, and (on occasion) evolutionary psychologist. The contents of the book have pretty dramatically changed the way I think about the world, given me a much clearer understanding of how things work, and given me a ton of insight into the brain and its functions. It has explained many behaviours and human tendencies to me, and it has lent weight (and offered some solutions) to my worry about how the current political climate is affecting our ability to think rationally and reasonably.

The nature vs. nurture debate is a somewhat solved one, in that we all know that pretty much everything is a combination of both. The goal of Steven Pinker’s book is to argue that we rely very heavily on the idea of the blank slate, the idea that everything we do and think is programmed into us by our environment, and that relying so heavily on this very false idea leads to serious issues in policy, education, politics, legal discussions, and more. The first half of the book dispels three popular myths, then discusses how some “hot button” topics such as gender, violence, and children could look when approached through a lens of human nature. Throughout the book, he argues that an understanding of human nature is necessary to understand who we are and how we act, and that challenging socio-political topics cannot be properly answered by ignorantly blaming our environment. In fact, a better understanding of these issues, one which takes into account the perspective of human nature and evolutionary psychology, helps us make progress toward goals of equality, freedom, and peace.

The first of the three myths he dispels is that of the blank slate itself: the idea that our environment plays a bigger role in our development than our genes do. I’d guess that (especially in the current socio-political climate) the average person would say that over 90% of our thoughts and behaviour are caused by our environment, and that intuition couldn’t be farther from the truth. The best example among the many, many studies suggesting that our genes account for at least 50% of what we consider “personality” is a series of studies comparing identical twins who were raised in different homes with adopted children who were raised in the same home. Time after time, the identical twins showed more similarities in personality, mannerisms, even strange tendencies like twirling a pen when they’re nervous, than the adopted children who were raised with the same parents, same parenting style, same rules, etc. It may be surprising, but you’ll understand if you read the book, which you absolutely should. Ignoring the idea that much of our personality is genetically determined can lead to dangerous decisions, and he outlines that perfectly in the book.

The second is the idea that he calls the ghost in the machine. This is the feeling that there is a “you” inside your head, a consciousness that tells your brain what to think, which subsequently tells your body what to do. In philosophy, this is sometimes referred to as mind-body dualism (i.e. the separation between mind and body). But the “you” simply is your brain: an extremely complex computer, a circuit of connections, pathways, and chemical reactions shaped by your genes and your experiences. With a proper understanding of the brain, there is no need for a soul or a separate “you” living inside your head, as if a ghost in the machine could manipulate your thoughts independently of your brain.

Lastly, he makes an extremely strong case against the idea of the noble savage. This is the somewhat popular (and completely incorrect) belief that if we didn’t have our current society (technology, capitalism, borders, governments, etc.), we would live peacefully, like the small tribes of people who have been secluded from modern civilization for hundreds of years. The truth of the matter is that in these pre-state societies, murder rates are unfathomably high, and rape, revenge killings, and violent inter-tribe warfare are common. The few studies that did find “peaceful tribes” were completely disproven shortly after their publication, yet their legacy lives on in our ignorant but hopeful minds.

Pinker goes on to talk about what he calls “hot-button” topics and brings up some interesting, controversial, and, I think, deeply true points. Among the less controversial are statements like “intelligence depends upon lumping together things that share properties, so we are not flabbergasted by every new thing we see”. He writes this while attacking the post-modernist idea that everything is socially constructed, including not only “race, gender, masculinity, nature, facts, reality, and the past” but, extending the list, things like “authorship, choice, danger, dementia, illness, inequality, school success”, and more. He talks about the mental process of conceptual categorization, explains how it works, and argues that stereotypes are not necessarily a bad thing, provided you don’t assume that every member of a group shares all of the properties of the stereotype of that group.

 

Another example is when he talks about violence, and how we wrongly blame violent toys and violent media for turning children into violent creatures. We have always been and will always be violent; men will always be more violent than women by nature; and we should try to understand our violent nature in order to correct it and keep improving toward a world where people live happily and peacefully. Not only do we ignore some genetic predisposition toward violence by invoking the blank slate theory, but we also miss the mark in trying to fix the issues that come from it. There’s a whole chapter on this, and then another entire book called The Better Angels of Our Nature where he expands even more on the topic, though I haven’t read that book yet.

The concept of the blank slate is dangerous in a few ways, and I think this is best described by Pinker himself:

The vacuum that it [the blank slate] posited in human nature was eagerly filled by totalitarian regimes, and it did nothing to prevent their genocides. It perverts education, childrearing, and the arts into forms of social engineering. It torments mothers who work outside the home and parents whose children did not turn out as they would have liked. It threatens to outlaw biomedical research that could alleviate human suffering. Its corollary, the Noble Savage, invites contempt for the principles of democracy and of “a government of laws and not of men”. It blinds us to our cognitive and moral short-comings. And in matters of policy it has elevated sappy dogmas above the search for workable solutions.

I highly, highly suggest you read this book. It has sparked even more curiosity in me to dive into psychology, evolutionary psychology, biology, and philosophy, and I think it has presented me with some very good answers. These are answers not only to questions I hadn’t thought of before, but also to questions I felt I knew the right answer to without knowing how to express that answer intelligently. This matters because a lot of things in this book can be seen as controversial, and many people know that the truth is sometimes not politically correct; if you can’t defend controversial points well, it discredits the ideas you stand behind.

And to end with two quotes for reflection:

“… Popular ideologies may have forgotten downsides – in this case, how the notion that language, thought and emotions are social conventions creates an opening for social engineers to try to reform them.”

“The strongest argument against totalitarianism may be a recognition of a universal human nature; that all humans have innate desires for life, liberty and the pursuit of happiness. The doctrine of the blank slate… is a totalitarian’s dream.”

Deontology vs Consequentialism + Questions

As you might know, I’ve been reading more about philosophy and psychology recently and have been thinking of a lot of interesting questions and learning a ton of new things. Today I’d like to talk about two somewhat opposing schools of ethical thought, describe a few flaws of each, give you my opinion, and ask you some challenging questions that should make you think or make you discuss some heavy stuff with a friend or significant other.

Before diving into deontology and consequentialism, I’d like to present an important term that we should think about while reading this: moral relativism. Moral relativism is the idea that there is no universally correct set of morals, and that your moral code can differ depending on many different factors. Whether you believe that all moral principles are universal, that only some of them are, or that all moral principles depend on external factors such as environment, it’s an important term to know and think about. I’ll likely give my opinion on this in another post, but for now let’s just keep it in mind.

Deontology

Deontology is an approach to ethics which emphasizes a strong code of moral rules that are abided by no matter the consequence. Some of the most famous deontological thinkers include John Locke and Immanuel Kant. Kant believed that we should only make moral choices which are universally true and will always be universally true, and he suggested treating humanity “never merely as a means to an end but always at the same time as an end,” meaning that regardless of the outcome, each choice you make along the way is important and should be made in a morally correct way.

Immanuel can or Immanuel Kant?

There are a couple of issues with deontological thinking. First, there are extreme cases where it breaks down. A common example used against this system of ethics is the case where a Nazi officer asks if you’re hiding a Jew in your house, and you don’t lie about it because lying is seen as “wrong”. The result is, of course, a murdered Jew (which hopefully you’re against), when the lie might not have harmed anyone except your moral code.

Second, this manner of thinking only works well if there is a universal right and wrong. While a person could abide by it with their own moral code, the benefit of deontology breaks down as soon as you have conflicting moral codes. Some questions to ask yourself that might challenge deontology include: Is it wrong to kill? What if the person killed your family? What if it’s in self-defense? When is it acceptable to lie? What is the acceptable punishment for a murder?

Consequentialism

On the other side of the coin, we have consequentialism. Consequentialism is concerned with the moral worth of the overall consequences of actions, not necessarily the actions themselves. Utilitarianism, made popular by John Stuart Mill and Jeremy Bentham, is a good example of consequentialism; it aims to maximize happiness for the greatest number of people, without much regard for the nature of the actions that make this possible. Not all consequentialists are utilitarians, but utilitarianism is a type of consequentialism.

Who do you think wins the hair competition between Kant and John Stuart Mill?

One criticism of consequentialism is that it seems to lack empathy, or any general concern for the well-being of individuals. If people are concerned only with the “greater good”, they might make decisions that knowingly harm individuals, as long as the net result is an increase in happiness for a larger number of people. Many people have drawn links between psychopaths and utilitarianism, whether those links are justified or not.

Another important criticism is that the “greater good” presupposes that there is some common thing that is undeniably “best”, similar to the flaw in deontology. It contradicts moral relativism and assumes that one way is the right way, and that everyone should act in the way that best achieves this greater good. Depending on your opinion of moral relativism, this might be a downside of the philosophy.

Some Opinions

So where do I stand on this scale of deontology and consequentialism? I think that anyone who knows me could tell you that I’m more of a consequentialist and utilitarian than anything else, but I think everyone is a bit of both.

The way I see it is that we all have a moral code that we try to abide by, and we make exceptions when the consequences of adhering to our moral code are contrary to our intent in keeping that moral code. The justification of our moral code is a personal thing, but my reasons for trying to keep to my moral code include minimizing harm to others, maximizing happiness for myself and for others, treating everyone as equal, being fair in judgments, and being able to observe situations objectively, even situations that I’m involved in. Given these goals, my moral rules (things like not lying, being fair to everyone, not solving problems with violence, etc.) can bend if the consequence of adhering to that moral code is negative. The amount that those rules bend depends on the severity of the outcome, for good or for bad.

A simple example is lying: we know that it’s wrong to lie, but when telling the truth will harm someone unnecessarily and lying will not harm anyone, we choose to tell what we call a white lie. In that case, the moral rule is bent only slightly, and the outcome avoids harm and increases happiness without strongly contradicting any of my deepest moral core philosophies. On the other hand, killing a 70-year-old to save a 10-year-old would be hard for me to do, because while the 10-year-old might have more potential life ahead of them, and that might be the right utilitarian / consequentialist decision, my moral code tells me pretty strongly that killing is wrong. In my opinion, on a day-to-day basis, doing the right thing means doing the thing that will result in the best outcome, as long as it doesn’t break your moral code and doesn’t compel anyone else to break their rules either, though of course there can be exceptions in extreme cases.

So where do you stand on this? Here are some questions you can ask yourself, and maybe ask a friend or significant other if you want some stimulating conversation:

  • How much importance do you put on having a strong moral code?
  • When do you bend it?
  • Would you ever break some of your own moral rules?
  • Is there a universal set of morals that should apply to everyone?
  • Do our morals come from evolution and millions of years of emotional development, or are they learned? Do they come from a belief or understanding of a higher power or a god? If it’s a mix, how much of an influence does each source have?
  • Can a universal set of rules exist without a universal higher power?
  • Do you think that outcome is more important than the steps taken to reach the outcome?
  • Are the lives of any two people equal? What if one of those people is your family member?
  • Can you think of a time when your moral code was tested (other than that last question)?

I hope you enjoyed this post, and I hope those questions made you think. I’ll write soon about my thoughts on moral relativism and what I call weighted utilitarianism, so keep an eye out if you’re interested!

References:

Consequentialism
Kantian Deontology
Utilitarianism
Moral Relativism
Against Consequentialism – Germain Grisez

An Introduction to Stoicism

Hey all!

I wanted to share some information about what I’ve been reading about recently, and see if I can pull out the most important points in a coherent, understandable fashion.

Marcus Aurelius was a Roman emperor who lived from 121 AD to 180 AD, and is said to be one of the most influential figures in our modern-day understanding of Stoicism. I picked up his book, or rather a collection of his writings, called “Meditations” (the Gregory Hays translation) and have been making my way through it. It’s essentially an amalgamation of the scrolls on which he wrote his own personal notes, kind of like a diary. The book has no specific order, division into chapters, or anything of the sort; simply small sentences or paragraphs which he wrote at some point, to himself, for some reason. It’s inspired me to learn a bit more about the philosophy, and I quite enjoy it.

The greatest thing about Stoicism, in my opinion, is that two of the most influential texts on the subject were written by a slave (Epictetus) and an Emperor (Aurelius)… but the principles still apply. I can’t imagine a better justification for how a philosophy could be applied by all people than the idea that a slave and an emperor can share the same ideas.

So what is it?

Stoicism got its name from the Greek word “Stoa”, meaning porch, because it was taught by Zeno in Athens in a (kinda porchy) place called Stoa Poikile. A philosophy grounded in logic and ethics, Stoicism has many tenets, but the few that sum it up for me are the following:

 

Don’t try to control what is out of your control.

Frustration comes from trying to control things that you can’t control. Accept that things have happened and that you can’t do anything about them now. It would be illogical to get upset or angry at something that you can’t affect, so live in the present.

 

Deconstruct things and see them for what they really are.

A great example of this, which I heard on an episode of the Kevin Rose podcast, was a Louis Vuitton bag. If you strip away all the surrounding stuff (name brand, socio-economic status symbol, etc.), you realize that it’s still just a few pieces of leather that hold some stuff, sewn together in a sweatshop in China. It doesn’t matter if it’s Louis Vuitton or a no-name brand; be aware of what the object actually is.

This doesn’t, however, mean that Stoics try to live without any material objects. From what I gather, the idea is that you set your baseline as the bare minimum that you need. If you need a car, any car that gets you from point A to point B will do. Once that baseline is set, if you have the money for it and feel like it, you can buy a Lexus. But, if ever that was taken away, you’re still above your baseline and should still be equally happy.

 

Make decisions according to the universal logos; follow reason and logic instead of emotions.

The Stoics believe(d) that there is a universal logos, which has been defined many ways, but which I believe is properly summed up by this definition from Merriam-Webster:

‘Reason that in ancient Greek philosophy is the controlling principle in the universe.’

There’s also a strong link between this logos and God, the gods, nature, and other terms. Basically their idea is that there’s a natural flow or order, and that there is universal truth in all things.

 

Eliminate unnecessary speech and action.

“No carelessness in your actions. No confusion in your words. No imprecision in your thoughts. No retreating into your own soul, or trying to escape it. No overactivity.” This quote from Meditations can be a helpful reminder to stay away from useless activity, as it won’t help you and it won’t help the people around you.

“If you seek tranquility, do less. Or rather, do less but do it better.”

 

Perception is the most powerful and most dangerous tool humans have.

Your reaction to people’s actions is what decides your happiness; things can’t affect you if you don’t let them. There is very little that is “good” or “bad” in the world; most things, actions, and circumstances simply exist, and there’s no need to label them. With this kind of objective approach, it’s easier to see things clearly. Humans are very capable of deciding their reactions to situations, feelings, and emotions, but we are also deeply affected when we let our emotions get the better of us.

 

I think that this book, and this philosophy in general, have a lot to teach. I don’t agree with every single point that I’ve read about Stoicism, but I do believe that everyone could learn something from reading about it. Especially in times like these, with the world seemingly so divided in thought and unable to discuss its differing opinions, we could all use a bit of emotional control in our thoughts and reactions.

Some other suggested books on Stoicism (which I’ll get to) come from Seneca and Epictetus, and a number of newer authors write about Stoicism as it applies nowadays, such as Ryan Holiday, whose book The Daily Stoic has been very successful.

See you soon!

 

Cognitive Biases to Watch Out For When Running a Business

Cognitive biases are everywhere and affect our daily lives in a huge way. They affect the way we think, the way we act, and the way we interpret information. A cognitive bias is essentially a case where our brain slips up and uses some illogical reasoning to come to (sometimes harmful) conclusions. These slip-ups are so common and so predictable that we can actually quantify, categorize, and test for them.

Today, I wanted to talk about a few cognitive biases that can specifically relate to the workplace, and describe how we might be able to get around them to produce better results and happier people.

 

Survivorship Bias

I put this one at the top because I believe it’s the one we’re most guilty of in the games industry. Survivorship bias is looking at the successes without acknowledging the failures, and it comes from the fact that most of the people we see are the ones who have succeeded. The companies that failed, well, they aren’t around to tell you how they failed. This is clear in the games industry when we go to conferences, listen to speakers, meet people at networking events, and so on. The people we meet are the ones who were at least successful enough to be at the event, and that’s already a big step up from the majority of start-up studios.

There has been a recent trend toward listening to people’s failures, which I think is a great thing. People are becoming more open about their failures, and we’re seeing things like the “failure workshop” at the Game Developers Conference, a series of talks about what went wrong and why.

My first tip for avoiding survivorship bias is to start small and dig deep. It’s harder to find stories about failures, because people are ashamed of them or because those stories aren’t visible on the platforms you’re looking at. So start small, by looking at slight failures. In the case of games, this might be a game that appeared to have great hype but only sold 2,000 copies. Why didn’t it sell well? What went wrong? This should be easy enough to find out by looking at public-facing information: trailers, reviews, etc. Then, try to go a little deeper. Find some games that look like they might have had a chance, but have no reviews and no public statistics. Sometimes you might have to reach out to developers directly and ask them what went wrong, and usually (in our industry at least) they’ll be happy to tell you.

For an interesting resource about failure, autopsy.io has a list of failed startups and the reasons why they failed.

The second tip is to strip a success down to its core. If you see something that worked, don’t focus on small details or hang on to gimmicks; the game didn’t sell because the main character had a hat, the game sold because the main character was relatable and their motivation was easily understood. This still falls into the trap of looking at successes, but it’s both less likely to lead you down a false path and more likely to allow for pattern recognition if you can strip the success down to its basic building blocks. Replace “the art style was pixel art with watercolour painted backgrounds” with “the game had a distinct, captivating art style”.

 

Conservatism Bias

“You can’t teach an old dog new tricks.” I’ve heard this a lot about companies: “stick to what you know”, “make small incremental improvements”, etc. Conservatism bias is rejecting new information and not being willing to venture into the new because the old way seems to work just fine.

I’d argue that this approach doesn’t work in any industry. Toilet paper is probably one of the most basic products I can think of, and it hasn’t changed in years. But if you want to be competitive in the toilet paper industry, I would guess that you still can’t be afraid to push the technology, push the manufacturing techniques, or push the boundaries of your marketing efforts.

In the games industry this is especially true. The technology is changing so quickly and the market is changing so quickly that we have to adapt with the times. Not only do we have to adapt in terms of the games we make, but we also have to adapt in terms of the way we manage our people, manage our workspace, and manage our lives.

I have two suggestions to help with this bias. The first is to keep your eyes and ears open: don’t say no to ideas flat out, and listen to what other people are saying. The second is to respect your peers. Your colleagues, partners, employees, and contacts often have more experience and knowledge in certain fields than you do, and to step outside of the box, sometimes you need to trust in others.

 

Pro-Innovation Bias

This is the complete flipside of the previous point. Pro-innovation bias involves being overly excited about new technology or innovation without thinking logically about potential outcomes. A good (made up) example could be trying to make a game with photo-real 3D graphics for mobile using new technology that requires 8x more RAM than other games. While the technology might be cool, our phones aren’t ready for that kind of thing, and the idea might fall flat on its face… if it has a face.

This isn’t to say to avoid innovation… not at all. The key is thinking realistically and logically about the limitations and the potential of the new innovation and deciding whether or not it’s a path you want to go down.

The most important thing to do to avoid this bias is to do your research. Is the market ready? Is the technology there? Is there a demand? Can you create a demand? A cool idea is cool, but that’s not necessarily a good enough reason to commit significant time and money to it.

 

Outcome Bias

Survivorship bias and outcome bias can be closely linked in the field of video games. We often judge our decisions based on the outcome of the situation, even if it wasn’t necessarily the right decision. That’s the core of outcome bias, and it can be dangerous, especially when the sample size of your “experiments” is so small. For example, if you make a decision pertaining to one game and it works, you might be likely to think it was the right decision simply because it worked. Another company may make exactly the same decision, and it doesn’t work out for them. In fact, even your own choice that works once (yes, we definitely need a live-recorded trailer because we had one for our last game!) won’t necessarily work the second time around… you’re probably missing a piece of the puzzle.
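To make the small-sample point a bit more concrete, here’s a quick toy simulation. It’s my own sketch, not something from the post, and the names and the 30% baseline success rate are arbitrary assumptions: even if a decision has zero real effect and success is mostly luck, a sizeable share of studios that made that decision will still see it “work” and be tempted to credit the decision for the result.

    import random

    # Toy simulation (illustrative assumption, not data from the post): a studio
    # decision, e.g. a hypothetical "live-recorded trailer", has NO real effect,
    # and every game simply has a 30% chance of succeeding on luck alone.
    random.seed(42)

    num_studios = 1000
    baseline_success_rate = 0.30  # assumed chance of success, decision or not

    # Each studio makes the decision, then succeeds or fails purely by luck.
    outcomes = [random.random() < baseline_success_rate for _ in range(num_studios)]
    winners = sum(outcomes)

    print(f"{winners} of {num_studios} studios saw the decision 'work'.")
    # With a sample size of one (your own game), you had roughly a 30% chance of
    # concluding that a decision with zero real effect was the key to success.

Run it with different seeds if you like; the point is simply that one success is very weak evidence, which is exactly the trap outcome bias sets.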

I think that one way to try to avoid this bias is, as I said previously, to do your research. If you can find cases where the same decision led to failure while in your case it led to success, there’s probably another factor at work. Another important way to avoid this bias is to argue your decisions based on facts and logic. I mean, the whole point of avoiding these biases is that you make your decisions based on logic, but if you can defend your original decision on its merits, and not just on the fact that it worked out, you have a much stronger argument. That way, when you make the decision again, you won’t succumb to this bias.

These cognitive biases can be found in this neat little infographic (which has been re-posted everywhere). There really are a million of these, and we could talk about them for days… but here I chose to focus on a few specific ones. Another great resource is this talk from my friend Dan Menard from Double Stallion Games. Seriously, go watch it. But read the paragraph below first 🙂

An interesting little experiment to try involves going through a day questioning your own decisions and actions, and really trying to observe yourself from a third-person perspective to see what kinds of biases affect your decisions. Everyone is affected by them, but being aware of it will likely lead you to more logical decisions in the future. I hope this article helped in some way to open your eyes to things to watch out for when in a leadership role, be it in game development or in any other field.