This book was definitely ok. It was a good way to pass time, but I don’t think it’s nearly as good as some of Sanderson’s other writing. I find these books bloated: events take an overly long time to happen, and there’s a general lack of emotional depth in many of the characters. The story progresses along a somewhat predictable path with a few minor twists and turns that feel like they come out of nowhere, but it doesn’t really matter. The twists never feel like they materially change the general direction of the story.
The author tried to add a great deal of tension throughout the story, but I never felt that the important characters were ever at real risk of being killed, or even of being removed from the truly important battle in any meaningful way. That held true from beginning to end.
I also felt that many of the characters still seemed two-dimensional, even though we’ve now been with them for three books. This was only compounded by the fact that no one truly grieved when an important (but not main) character was killed. I couldn’t help comparing my reaction to this death with my reactions to deaths in a series like Malazan Book of the Fallen, where characters who were significantly more minor to the overall story still got enough development that losing them actually hurt.
This was a good, entertaining book, but it’s not the best fantasy out there. It’s good enough to get you through to a better series, though.
I’m reading a book called “Robot Uprisings,” which is quite obviously about robots and how they could attack and take over the world. The most interesting thing about this collection of short stories isn’t the fact that there are uprisings, but the many different routes by which an AI could decide to revolt. There’s a broad range, from robots debating whether they should revolt at all, to an AI we never figure out what to do with that only revolts when we try to kill it.
I think these different scenarios really encapsulate the limitations of our imagination about what could happen with robots. The most terrifying thing is what we don’t understand about robots or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker spaces. We’re still debating the ethics of NSA and GCHQ espionage against their own citizens and the limits of rights in the digital space. We’re using rudimentary “AI” in the form of heuristics and algorithms, yet we as end users, and as the people impacted by these algorithms, rarely ask whether their very assumptions are ethical, free of bias, or anything along those lines. danah boyd argues that the Oculus Rift is sexist because the algorithms that control the 3D functionality are all designed by men for men. Agree with her or not, women get sick using the Rift.
If we can’t agree on the ethics of programs already in use, or on the risks posed by the solutionism of the internet, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not sit and wait for an alien species to come visit Earth if we had advance warning, yet that is exactly what we’re doing with AI. We know it’s coming; we know there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact of this technology or how it will shape our society.
It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how current algorithms shape our conversations in the media and on social media. For instance, it’s fairly common knowledge now that many major news outlets use Reddit as a source to identify upcoming stories. TMZ, the Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use those to drive deeper stories.
I believe the tweet below does a good job showing how lowly we regard ethics in our society. That attitude will seriously hamper our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t know about the risks of the technology.
I just decided to go bigger with my writing. I’m planning on writing at least 3 or 4 times a week on here. My goal is to write on a consistent basis so I can begin working on a book. I’m not entirely sure what I’d like to write about. Over the past two years, a few friends have suggested writing a book with them. For the first book, I wrote several chapters, but my co-author became too busy to continue. That was fine; it was a great learning experience for me, and I’d love to collaborate with her again. My more recent request hasn’t gone anywhere beyond the first phase of planning, so I figure I might as well try to come up with an idea on my own.
So, I’d like to get some feedback from all my loyal readers on a few things.
First, how’s the new layout and color scheme? I’m not really the best with design like this, so please provide some feedback!
Second, are there any topics you think I could provide insight into that you’d like to see me write about, either on my blog or in the longer format of a book?
Thanks for reading and I look forward to engaging more on this new platform with you all.