“That’s a good location, but I think over there is a better spot. You need brick, and that spot will help you collect it when you build a settlement.” William ignored my advice as he built his road in a no man’s land that did nothing to further his future ability to collect resources.
This was our second night in a row over Christmas break playing “Settlers of Catan”. I grew up playing this board game and eventually introduced it to my wife in college. She became so good that she began to win, and after a few victories in a row she stopped playing, so she could always remind me of her superior play.
With my sons getting older, I found an opportunity to reintroduce it into the family culture. I love the game because it has a good mix of strategy, table-talk diplomacy, and old-fashioned fortune. You may pick the good numbers, but some games they never get rolled, and your plans languish while your son buys development card after development card.
William played well for his first game. I think he got some lucky numbers, and despite not having a key resource, he built all his settlements into cities. It was such a joy to watch him get his footing in a game I loved growing up. In sharing a love from my childhood, I found a rewarding opportunity to coach William through his first game. It was a good reminder that he is a free person, and as much as my decades of experience and expertise come into the game, it’s still his decision where to place that road. I think he put it in the wrong spot, but that’s life.
As Jeff and I discussed ChatGPT on our podcast, we kept coming back to the need for expertise when using any service that’s going to help you do your job. This game tonight, though, reminded me that sometimes people won’t take our advice. They may have their own designs, or they may simply want to take the resources, or the answer, they’re dealt and solve their problems themselves.
ChatGPT sounds authoritative because its responses arrive in a format we’ve been shaped to accept as authoritative. It speaks with confidence, it doesn’t second-guess itself, and when it isn’t sure, it offers different perspectives to give the appearance of balanced consideration.
But it’s just a correlation machine. The example that drove this home for Jeff was its claim that Roscoe Conkling wasn’t in favor of civil rights. That isn’t true; Conkling was a proponent of the 14th Amendment. You can see how the machine gets this correlation wrong. There is a lot of writing, especially recently online, about Republicans opposing civil rights. The correlation engine inside ChatGPT conflates any Republican with an anti-civil-rights stance, even when that isn’t accurate. The challenge for us is to make sure that when we use a chatbot, we know enough about the subject to judge whether the response is correct.
It’s just like life: we still need to guide those under our care and use our expertise to help them see the truth. But sometimes they don’t listen, just like in Settlers of Catan.
Katie won in the end. I hope I can get her to play again.
Photo by Galen Crout on Unsplash