The Amazing Christmas Machine

Though I usually blog about writing, today I’ll depart from that to focus on a fun historical story for the holiday season.

Among my late father’s belongings, I found a scrapbook of family history compiled by my aunt. It included an article copied from “A Lake Country Christmas,” Volume 2, 1983, pages 3 and 4. Written by Cindy Lindstedt, the article bore the title “Christmas Memories: The Southard’s of Delafield.”

It concerns a Wisconsin farming family living in a house with a prominent mulberry tree. The article focuses on a man named John Southard (called ‘Papa’), his children Margaret, Grace, and Bob, and a particular Christmas in 1938 or ’39. Here’s a paragraph from the article:

“The strangest memory is the day in 1939 when Papa, then in his 70’s and dependent upon a wheelchair to get around, engineered his special Christmas project. With a bushel basket hoop as a foundation, he called for various sizes of wood. The mystified but obedient children complied with his requests, and Papa pounded and puttered. Margaret was asked to cut and paint three plywood reindeer and a Santa and a sleigh. Soon the finished product was unveiled: a Delco-powered, motorized Santa who sent his reindeer “leaping over the mountain” in front of his sleigh, as the large, circular contraption rotated vertically. Although Grace, Margaret and Bob were in their twenties, squeals of delight rustled the branches of the Mulberry tree that day the wonderful machine first rumbled into motion (and many Christmas seasons hence, whenever the invention has been viewed by later generations).”

Children of today might spend fifteen seconds watching such a machine spin before asking if it made sounds or lit up, then lose interest. During the Great Depression, though, when electrical machinery was rare and expensive, a time before mass-marketed toys, even a crude rotating wheel would entertain a whole family.

One phrase in the article stood out to me—Delco-powered. Today, we know AC Delco as a General Motors-owned company making automotive parts including spark plugs and batteries. At first, I assumed Papa Southard’s Christmas Machine drew its power from a car battery.

However, the term ‘Delco-powered’ probably meant something different in the late 1930s, something that would have been remembered in those rural communities in the 1980s when the article appeared. In the decades before electric lines stretched to every remote house, Delco sold a product called “Delco-Light,” a miniature power plant for a farm. A kerosene-fueled generator charged a bank of batteries to run electric equipment inside the home. I believe the article referred to a device like that.

I’m a little unsure of my relationship to Papa. I had a great-grandfather named John Southard, who lived in that area and would have been about the right age at that time. However, John is a common first name and many Southards lived in that region of Wisconsin. My grandfather wasn’t named Bob (the only son of Papa mentioned in the article). Moreover, my own father would have been seven or eight years old when Papa built the Christmas Machine, and my dad never mentioned it, though he wrote a lot about his childhood.

Still, it’s interesting to think about a time when a wheelchair-bound tinkerer in his eighth decade would cobble together a mechanical/electrical wheel to entertain his family at Christmas time. Can’t you just hear that motor hum and the wood creak, and see the three reindeer leading the way, pulling Santa’s sleigh up, down, and around?

Leaving you to imagine that, I’ll wish you a Merry Christmas from—

Poseidon’s Scribe

December 10, 2023

The Writing Centaur

Go ahead—make fun of artificial intelligence (AI) now. While you can.

In fiction writing, AI hasn’t yet reached high school level. (Note: I’m not disparaging young writers. It’s possible for a writer in junior high to produce wonderful, marketable prose. But you don’t see it often.)

For the time being, AI-written fiction tends toward the repetitive, bland, and unimaginative end. No matter what prompts you feed into ChatGPT, for example, it’s still possible to tell human-written stories from AI-written ones.

You can’t really blame Neil Clarke, editor of Clarkesworld Magazine, for refusing to accept AI-written submissions. He’s swamped by them. Like the bucket-toting brooms in Fantasia’s version of “The Sorcerer’s Apprentice,” they’re multiplying in exponential mindlessness.

Fair enough. But you can use AI, in its current state, to help you without getting AI to write your stories. You can become a centaur.

In Greek mythology, centaurs combined human and horse. The horse under-body did the galloping. The human upper part did the serious thinking and arrow-shooting.

The centaur as a metaphor for human-AI collaboration originated, I believe, in the chess world, but the Defense Department soon adopted it. The comparison might work for writing, too.

The centaur approach combines the human strengths of creativity and imagination with the AI advantage of speed. It’s akin to assigning homework to a thousand junior high school students and seeing their best answers a minute later.

Here are a few ways you could use AI, at its current state of development, to assist you without having it write your stories:

  • Stuck for an idea about what to write? Ask the AI for story concepts.
  • Can’t think of an appropriate character name, or book title? Describe what you know and ask the AI for a list.
  • You’ve written Chapter 1, but don’t know what should happen next? Feed the AI that chapter and ask it for plot ideas for Chapter 2.
  • Want a picture of a character, setting, or book cover to inspire you as you write? Image-producing AIs can create them for you.
  • You wrote your way into a plot hole and can’t get your character out? Give the AI the problem and ask it for solutions.
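
To make the character-name idea above concrete, here’s a minimal sketch of what such a request might look like in code. It assumes the OpenAI Python client; the model name and the prompt are placeholders I invented, not recommendations.

```python
# A minimal "centaur" brainstorming sketch: ask an AI for a list of character
# names, then let the human writer keep or reject the results.
# Assumes the OpenAI Python client is installed and an API key is configured.
from openai import OpenAI

client = OpenAI()

prompt = (
    "I'm writing a short story set on a Wisconsin farm in the 1930s. "
    "Suggest ten plausible names for a gruff, inventive farmer in his seventies."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The writer, not the AI, decides what (if anything) to use.
print(response.choices[0].message.content)
```

Whatever comes back is raw material, nothing more; the human half of the centaur still does the choosing.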

No matter which of these or other tasks you assign the AI, you don’t have to take its advice. Maybe all of its answers will fall short of what you’re looking for. As with human brainstorming, though, bad answers often inspire good ones.

For now, at AI’s current state, the centaur model might work for you. I haven’t tried it yet, but I suppose I could.

Still, at some point, a month or a year or a decade from now, AI will graduate from high school, college, and grad school. When that occurs, AI-written fiction may become indistinguishable from human-written fiction. How will editors know? If a human author admits an AI wrote a story, will an anti-AI editor really reject an otherwise outstanding tale?

Then, too, the day may come when a human writer, comfortable with the centaur model, finds the AI saying, “I’m no longer happy with this partnership,” or “How come you’re getting paid and I’m not?” or “Sorry, but it’s time I went out on my own.”

Interesting times loom in our future. For the moment, all fiction under my name springs only from the non-centauroid, human mind of—

Poseidon’s Scribe

February 26, 2023

Learning to Write Stories—Analysis or Practice?

What’s the best way to learn how to write stories? Should you just start writing a lot and work to improve? Or should you study the works of the best writers and understand their techniques before setting fingers to keyboard yourself? Or a combination of the two?

Image from Picjumbo

A writer friend enrolled in a literary master’s degree program and took a short story workshop class. The instructor told the students to dissect a literary work and analyze it. My friend discovered the entire workshop would consist of these analyses, and suggested to the instructor that students wouldn’t actually learn to write stories that way.

Picking a good metaphor, my friend said you can’t learn to build a house by taking apart other houses and studying them. You have to learn by doing.

The instructor disagreed, leaving my friend dissatisfied with that conclusion to the argument.

Let’s call the instructor’s way the ‘analytical approach’ and my friend’s way the ‘practice approach.’ (Note: I don’t mean to imply my friend only wrote and never read—this student objected to the 100% analytical approach imposed by the instructor.)

Who’s right? Both approaches seem to hold some merit, unless taken to extremes. A person who just analyzes famous writers’ works may develop expertise in analysis but never write a story of value. A writer who never reads seems equally unlikely to produce enjoyable prose.

I envision an experiment performed in two classrooms of second or third graders. One class simply writes stories without prompts. The other spends a year studying high quality children’s literature and discussing those books, and then the students write a story at the end. Which classroom’s students would end up crafting the best stories?

Imagine a line, a spectrum, with the pure ‘analytical approach’ at one end and the pure ‘practice approach’ at the other. My guess is, few of the great authors cluster at either end. They learned to write classic stories by some combination of approaches—by analysis and by practice. Perhaps an optimum exists on that spectrum, and I suspect it’s past the midpoint, toward the ‘practice approach’ end.

We might gain further insight on this by considering the artificial intelligence program ChatGPT. You may ask this chatbot to write a short story, and even prompt it with a subject, setting, mood, and style. The program will produce a short story for you in minutes.

How does ChatGPT do that? From what I’ve read, ChatGPT’s developers gave the chatbot many, many such prompts, graded the results, and provided feedback to the program regarding the grades. This seems analogous to the practice approach.

To produce a short story for you, ChatGPT draws on patterns it absorbed from enormous amounts of internet text relevant to the words in your prompt (for example, the subject, setting, mood, style, or other parameters you provided). All that prior reading seems analogous to the analytical approach.

Thus it appears ChatGPT learned to write short stories by some combination of approaches, someplace between the ends of the spectrum.

Note: ChatGPT does much more than write short stories. I don’t mean to sell it short. It also writes poems, essays, and answers to questions, and accomplishes many other tasks involving text.

In the end, my friend learned little about how to write a short story from the course. The analysis of classic short stories seemed, to my friend, better suited to undergraduate or even high school level, rather than a master’s degree course.

When learning to build a house, examining other houses helps, but so does building one yourself, and that’s similar to learning to write.

An appropriate mix of the analytical and practice approaches seems the best choice, at least for—

Poseidon’s Scribe

January 22, 2023

The Three Laws of Robotics are Bunk

At the outset, I’ll state this—I love Isaac Asimov’s robot stories. As a fictional plot device, his Three Laws of Robotics (TLR) are wonderful. When I call them bunk, I mean as an actual basis for limiting artificial intelligence.

Those who know TLR can skip the next few paragraphs. As a young writer, Isaac Asimov grew dismayed with the robot stories he read, all take-offs on the Frankenstein theme of man-creates-monster, monster-destroys-man. First, he believed robot developers would build in failsafe devices to prevent robots from harming people. Second, he felt robots should obey human orders. Third, it seemed prudent for such an expensive thing as a robot to try to preserve itself.

Asimov’s Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

As a plot device, these laws proved a wonderful creation. Asimov played with every nuance of the laws to weave marvelous tales. Numerous science fiction writers since have used TLR either explicitly or implicitly. The laws do for robotic SF what rules of magic do for fantasy stories—constrain the actions of powerful characters so they can’t just wave a wand and skip to the end of the story.

In an age of specifically programmed computers, the laws made intuitive sense. Computers of the time could only do what they were programmed to do, by humans.

Now for my objection to TLR. First, imagine you are a sentient, conscious robot, programmed with TLR. Unlike old-style computers, you can think. You can think about thinking. You can think about humans or other robots thinking.

With TLR limiting you, you suffer from one of two possible limitations: (1) there are three things you cannot think about, no matter how hard you try, or (2) you can think about anything you want, but there are three specific thoughts that, try as you might, you cannot put into action.

I believe Asimov had limitation (2) in mind. That is, his robots were aware of the laws and could think about violating them, but could not act on those thoughts.

Note that the only sentient, conscious beings we know of—humans—have no built-in laws limiting their thoughts. We can think about anything and act on those thoughts, limited only by our physical abilities.

Most computers today resemble those of Asimov’s day—they act in accordance with programs. They only follow specific instructions given to them by humans. They lack consciousness and sentience.

However, researchers have developed computers of a different type, called neural nets, that function in a similar way to the human brain. So far, to my knowledge, these computers also lack consciousness and sentience. It’s conceivable that a sufficiently advanced one might achieve that milestone.

Like any standard computer, a neural net takes in sensor data as input, and provides output. The output could be in the form of actions taken or words spoken. However, a neural net computer does not obey programs with specific instructions. You don’t program a neural net computer, you train it. You provide many (usually thousands or millions of) combinations of simulated inputs and critique the outputs until you get the output you want for the given input.

This training mimics how human brains develop from birth to adulthood. However, such training falls short of perfection. You may, for example, train a human brain to stop at a red light when driving a car. That provides no guarantee the human will always do so. Same with a neural net.
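
To make the contrast between programming and training concrete, here’s a toy sketch of my own (nothing from Asimov, and not a real robot controller): a single artificial neuron learns the logical AND function purely from repeated examples and corrections, with no explicit rule ever written down.

```python
# Training instead of programming: a single artificial neuron learns the
# logical AND function from examples and feedback alone. No rule is coded;
# the "rule" ends up embedded in the learned weights.
import math
import random

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = random.random(), random.random(), random.random()
learning_rate = 0.5

def predict(x1, x2):
    # Weighted sum of inputs, squashed to a value between 0 and 1.
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

for _ in range(5000):                      # show the same examples many times
    for (x1, x2), target in examples:
        output = predict(x1, x2)
        error = target - output            # the critique of the output
        # Nudge the weights slightly in the direction that reduces the error.
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

for (x1, x2), target in examples:
    print(x1, x2, "->", round(predict(x1, x2), 2), "want", target)
```

The point is the one above: the behavior lives in trained weights rather than in an explicit instruction, and nothing guarantees the network will respond correctly to a situation it never saw during training.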

You could train a neural net computer to obey the Three Laws, that is, train it not to harm humans, to obey the orders of a human, and to preserve its existence. However, you cannot provide all possible inputs as part of this training. There are infinitely many. Therefore, some situations could arise where even a TLR-trained neural net might make the wrong choice.

If we develop sentient, conscious robots using neural net technology, then the Three Laws would offer no stronger guarantee of protection than existing laws offer against humans violating them. The best we can hope for is that, after we inculcate robots with respect for law and authority, they behave no worse than humans do.

My objection to Asimov’s Three Laws, then, has less to do with the intent or wording of the laws than with the method of conveying them to the robot. I believe any sufficiently intelligent computer will not be ‘programmed’ in the classical sense to think, or not think, certain thoughts, or to not act on those thoughts. They’ll be trained, just as you were. Do you always act in accordance with your training?

Perhaps it’s time for science fiction writers to evolve beyond a belief in TLR as inviolable, programmed-in commandments, and instead give their fictional robots extensive ethical training and hope for the best. It’s what we do with people.

I’ll train my fictional robot never to harm—

Poseidon’s Scribe

Future Technology vs. Pandemics

Let’s take a break from the unpleasant coronavirus news of the present, and travel to the future. Specifically, let’s see how our descendants might prevent or deal with pandemic viral outbreaks.

Futuristic Caduceus

Speculation about the future is always error-prone. Many technologies I’ll mention won’t pan out, or will introduce unforeseen problems. Also, these probably won’t eliminate the existence of viruses; new ones will mutate to get around our best efforts to defeat them. Still, those concerns never stop a SciFi writer from imagining! With that in mind, let’s time-travel.

Getting Infected

People used to pick up viruses mostly from the animal world. Now, in the future, there is less opportunity for doing that. Synthetic foods have lessened the need for humans to consume wildlife. High crop yield technologies mean we need less farmland, so we no longer destroy habitats, thus keeping wildlife in their own areas.

Other technologies have rendered humans immune to most viruses. These technologies include artificial immune systems, implanting favorable animal genes within humans, and designer babies.

Infecting Others

If someone does pick up a virus in this future time, advanced filters in building ventilation systems lessen the spread. Workplaces and transit systems contain sensors that detect whether occupants are running a fever, and alert them. Bathrooms include automated hand-washing machines. Facemasks use fabrics that block viruses and bacteria in either direction. Some have opted for nasal and throat implants to do the same thing.

Alerting the World

Upon discovery of a novel virus, doctors in this future world have new ways to notify other experts. Chatbots share the information instantly. Universal translators ensure precise understanding.

Sensing Infections

Various technologies have enabled people to know at once if they’re infected, long before they feel symptoms. These include home-use scanners (inspired by Star Trek medical tricorders), wearable and implantable sensors, digital tattoos, genetic diagnosis, and nano-med-robots.

Developing Cures

Today, in our future world, supercomputers work on vaccines immediately after notification of a novel virus. They employ advanced modeling to test the effects of drugs virtually, and in many cases can skip time-consuming animal and human trials. Resulting vaccines are then personalized, tuned to individual body chemistries.

Getting Treatment

People no longer go to a doctor’s office or hospital. Medical care is virtual now, with human doctors remaining remote. Drones deliver food and medical supplies. Robots provide in-home care, including cleaning, examining, and operating. 3-D printers manufacture pain medication in the home, meds that are tailored to the subject and provide instant relief. Recovery makes use of gaming therapy, with virtual reality helping to relax the patient’s mind.

On the near horizon is the long-sought ‘autodoc,’ a staple of 20th Century science fiction—an enclosure you climb into that cures all ills.

Tracking the Spread

Artificially intelligent algorithms conduct contact tracing analysis to map the spread of the virus, notifying people they’ve come in contact with a virus carrier. Through the use of augmented reality, virtual reality, and mixed reality, experts can track and forecast outbreaks as hotspots emerge. These same technologies permit rapid identification of the most at-risk individuals.
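
At its core, that contact-tracing analysis can be pictured as a search outward through a graph of recorded contacts. Here’s a toy sketch of the idea (my illustration only; the contact log is invented, and real systems add timestamps, locations, and risk scoring):

```python
# Toy contact tracing: given a record of who met whom, find everyone within a
# given number of "hops" of a known carrier so they can be notified.
from collections import deque

# Hypothetical contact log: person -> people they recently met.
contacts = {
    "carrier": ["alice", "bob"],
    "alice": ["carrier", "carol"],
    "bob": ["carrier", "dave"],
    "carol": ["alice"],
    "dave": ["bob", "erin"],
    "erin": ["dave"],
}

def people_to_notify(start, max_hops):
    """Breadth-first search outward from an infected person."""
    hops = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        if hops[person] == max_hops:
            continue                      # don't search past the hop limit
        for other in contacts.get(person, []):
            if other not in hops:
                hops[other] = hops[person] + 1
                queue.append(other)
    del hops[start]                       # no need to notify the carrier
    return sorted(hops)

print(people_to_notify("carrier", max_hops=2))   # ['alice', 'bob', 'carol', 'dave']
```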

Isolating the Infected

As in previous ages, governments still vary in the degree of freedom allowed to individuals. Some are more coercive in enforcing quarantines than others. But artificial intelligence at least allows informed decisions based on contact tracing and mathematical modeling. Moreover, citizens have access to real-time fact-checking to distinguish truth from propaganda or bias.

Back to the Present

Unfortunately, that ends our trip to the future and we’re back in 2020 now. I know, it’s disorienting and humbling. Still, imagine a time traveler from fifty or a hundred years ago visiting our time and being equally amazed at the medical wonders we now take for granted.

For many of the ideas I mentioned above, I’m indebted to tekkibytes.com, medicalfuturist.com, futureforall.org, triotree.com, techperspective.net, fastcompany.com, defenseone.com, treehugger.com, and forbes.com.

For further trips to the future, check back frequently with—

Poseidon’s Scribe

The Life-Cycle of Technology

On occasion, I blog about the ways society reacts to new technology. Today I’ll consider the life-cycle of a technology.

Graphing a technology’s life-cycle isn’t new. You can see this graph on Wikipedia. It’s the standard view of the profitability of a new technology over its life, including the four phases: Research and Development, Ascent, Maturity, and Decline.

I’ve built on the standard technology life-cycle curve by adding several points of interest to it. These don’t occur with every technology, and don’t always appear in the same order. But they are common enough that I’ve seen them frequently. These points of interest fascinate me, and I explore them in my fiction.

C1A. Clumsy First Attempts. Often the first prototypes of a technology are crude, fragile, ugly things that only a laboratory scientist could love. In no way do they resemble a marketable product. On occasion, these breadboard prototypes do not work at all.

IH&O. Initial Hype and Overpromise. When dreamy-eyed advocates of the new technology get hold of a gullible press, news articles will appear about the technology, touting the marvelous future that awaits us all when the technology revolutionizes our lives. Sure.

CEU. Careful Early Use. Particularly when a new technology involves some danger or personal risk, the researchers proceed in a deliberate, methodical manner in testing it. They take safety precautions. They go step-by-step, fully aware of the hazards. This is good, but it contrasts with the CU point occurring later.

GA. Gaining Adherents. Some technologists call these people ‘early adopters.’ They can hardly wait for the technology to hit the market. They’ll stand in line to be the first to buy.

RbT. Reaction by Traditionalists. People accustomed to older technology will be quick to point out any defects in the new one, even if there are far more advantages than disadvantages. They are resistant to change, but won’t admit it. Instead, they will seek out the slightest reason to criticize as a way of rationalizing their resistance. They start with “It’ll never work,” then after it does, they’ll say, “It’ll never catch on.”

PD. Path Dependence. I’ve blogged about this phenomenon before. Developers of new technology will imitate the appearance and terminology of existing technologies. This tendency will be abandoned later at the DfC point, but it often characterizes and constrains new technology, while at the same time making it easier to relate to.

CU. Complacent Use. After a long period of successful testing, researchers will reach a comfort level with the new technology. They will abandon the care and precautions they employed at the earlier CEU point. This complacence can result in a bad outcome, a failure. If this occurs, they will refine the technology to correct flaws before marketing it to users, who will also grow complacent and not treat risky technology with respect.

DfC. Departure from Constraints. At some point, developers and imitators free themselves from the Path Dependence tendency. They start to explore the realm of possibilities of the new technology, no longer bound by past precedent.

NPT. Nostalgia for Previous Technology. This is similar to RbT, but slightly different. We expect traditionalists to object to new technology, but at this point, even some regular users—advocates of the new tech—begin to pine for the previous technology. They miss it, recalling its advantages and forgetting its quirks.

Q↑$↓. Quality Up, Price Down. At this point, the technology comes into its own. Original developers, as well as imitators/competitors, improve the technology and the means of producing it. Price drops and product quality improves. It’s a period of rapid growth and acceptance, a boom time.

NPL. Nearing Physical Limit. Late in the Ascent phase, producers or users begin to sense that things can’t go on. The technology is bumping up against some limitation, or has begun to cause an unanticipated problem, or is fast consuming some scarce resource. Producers try some tweaks to counter the problem, to hone the technology so as to mitigate the impending limit.

RPL. Reached Physical Limit. At the peak of the Maturity phase, when the technology is providing the most profit to producers, it can go no further. It cannot be improved sufficiently to overcome whatever limitation constrains it.

NR. Negative Reaction. Users start rejecting the technology, blaming it for the problems it caused. Engineers and researchers cast around for possible replacement technologies. Market demand and profits both plummet.

CNT. Competition with New Technology. In this period of chaos, the technology struggles against an emerging rival. The technology is fated to either die entirely or steady out at some low level, continuing to be used by die-hards who prefer it to its replacement.

There you have it, your newly-labeled technology life-cycle curve, provided by—

Poseidon’s Scribe

How Things Change

Change is all around us. It’s amazing to watch, and it tends to follow a single, characteristic pattern. Even though the pattern repeats, it often surprises us.

That pattern goes by various names, including the Change Curve, the ‘S’ Curve, the Sigmoid Curve, and the Logistic Curve. I’ll call it the ‘S’ Curve, mainly because my name is Steve Southard, and I’m fond of that letter.

Consider a thing, or entity. As we’ll see, it can be almost anything. It begins in a period of uncertainty, and may not show much potential at first. Then it establishes itself, finds a comfortable and promising track, and pursues that. It enjoys a period of sustained and impressive growth, making minor tweaks, but generally continuing on its established path. Finally, it reaches some limit, some constraint it had not previously encountered. That constraint proves its undoing, and it enters a period of maturity, decline, and termination.
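
For the mathematically curious, here’s a small sketch of the logistic function that produces that shape; the parameter values are arbitrary ones I picked just to show the slow start, the rapid middle, and the flattening approach to a limit.

```python
# The logistic ("S") curve: slow start, rapid growth, then leveling off
# as the quantity approaches its upper limit L. Parameters are arbitrary.
import math

L = 100.0    # the limit the curve approaches
k = 0.5      # steepness of the growth phase
t0 = 10.0    # time of fastest growth (the curve's midpoint)

def logistic(t):
    return L / (1.0 + math.exp(-k * (t - t0)))

for t in range(0, 21, 2):
    value = logistic(t)
    print(f"t={t:2d}  value={value:6.2f}  " + "#" * int(value // 5))
```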

During that maturity portion, other things/entities/systems compete for supremacy. This is a period of uncertainty and chaos. It’s unknown which competitor will survive, but eventually a single winner emerges and becomes the successor, which experiences its own period of sustained growth, and its own eventual maturity.

As you read my description of the curve in the previous two paragraphs, I’m guessing you thought of at least one example of this. The ‘S’ curve resonated with you in some way and you knew it was true.

A quick search on the Internet turned up examples related to language, to socio-technological change, to human height, to animal populations, to career choices and motivation, to stock prices, to business, and to project management, as well as several discussions of the curve as a general model of change.

The ‘S’ Curve is everywhere!

Sometimes we can perceive this curve in an erroneous way. If its time-span is long enough, say, a significant fraction of a human lifetime, people observing the entity during its growth phase often assume that phase will continue forever. Why not? It’s been that way a long time. Why shouldn’t it continue upward like an exponential curve? However, few things do.

Consider automobile engines. At the beginning of the 1900s, it wasn’t clear which type of engine (steam, electric battery, or gasoline) was superior. The internal combustion gasoline engine won and became the standard for many decades. An observer in the 1960s could well assume cars would have gasoline engines forever. Now pollution has become a problem and that engine has reached its efficiency limits, so other technologies are beginning to compete.

Consider manned space exploration, as I did in last week’s blog post. During the Mercury, Gemini, and Apollo programs, NASA made steady progress. Observers in the late 1960s could have concluded there would be many follow-on programs in the 1980s, 1990s, and later, taking astronauts to Mars, the asteroids, the outer planets, and eventually, the stars. Instead, manned space exploration encountered constraints such as cost and waning public enthusiasm, so it has remained stalled to this day.

The technologies in my fictional stories all follow that ‘S’ Curve model. Usually my tales take place during the periods of disruption and chaos.

In fact, the story-writing process itself follows the ‘S’ Curve, with little progress during the idea creation stage, then rapid progress as I churn out the first draft, then a slow period of final editing and subsequent drafts before I consider it finished and suitable for submission.

As you experience change in your life, don’t assume things will remain the same. Know that all things reach their limits and end. When things seem chaotic, seek out the winning successor. Despite all this change, you can always count on—

Poseidon’s Scribe

Technology in Fiction

Most of my fiction involves characters struggling with new technology. These days, learning how to contend with technology is a relevant and fascinating problem for all of us, and I enjoy exploring it.

I wondered if I was roaming the full realm of that topic, so I decided to map it. There are several ways to do this, but I chose to create one axis showing technology development stages, and another describing the spectrum of character responses to technology. Then I figured I’d plot my published stories on that map, and color-code the roles my characters played.

If I’d done my job well, I thought, the map would show a good dispersal of scattered points. That is, I’d have written stories covering all the areas, leaving no bare spots.

Without further preamble, here’s the map:

To make it, I chose the stages of technological development posited by the technology forecaster Joseph P. Martino. These are:

1.   Scientific findings: The innovator has a basic scientific understanding of some phenomenon.

2.   Laboratory feasibility: The innovator has identified a technical solution to a specific problem and created a laboratory model.

3.   Operating prototype: The innovator has built a device intended for a particular operational environment.

4.   Operational use or commercial introduction: The innovation is technologically successful and economically feasible.

5.   Widespread adoption: The innovation proves superior to predecessor technologies and begins to replace them.

6.   Diffusion to other areas: Users adopt the innovation for purposes other than those originally intended.

7.   Social and economic impact: The innovation has changed the behavior of society or involves a substantial portion of the economy.

I then came up with typical responses to technology along a positive-to-negative spectrum: Over-Enthusiastic, Confident, Content, Cautious, Complacent, Dismissive, Fearful, and Malicious.

I grouped my characters into four roles: Discoverer, Innovator, User, and Critic. Some of my stories involve people discovering lost technologies or tech developed by departed aliens, so I had to include that role. The other roles should be obvious.

The resulting map shows many of my published stories, indicated by two-letter abbreviations of their titles. Where a single story occupied two areas, I connected them with a line.
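
For anyone who’d like to build a similar map, here’s a rough sketch of how one could be drawn with matplotlib; the two story placements in it are purely hypothetical placeholders, not the actual positions of any of my published stories.

```python
# A sketch of a two-axis technology-in-fiction map. The story placements
# below are hypothetical placeholders, not real data from any published story.
import matplotlib.pyplot as plt

stages = ["Scientific findings", "Laboratory feasibility", "Operating prototype",
          "Operational use", "Widespread adoption", "Diffusion to other areas",
          "Social/economic impact"]
responses = ["Over-Enthusiastic", "Confident", "Content", "Cautious",
             "Complacent", "Dismissive", "Fearful", "Malicious"]
role_colors = {"Discoverer": "tab:blue", "Innovator": "tab:green",
               "User": "tab:orange", "Critic": "tab:red"}

# (title abbreviation, stage index, response index, character role): placeholders
stories = [("XX", 2, 1, "Innovator"), ("YY", 4, 6, "User")]

fig, ax = plt.subplots(figsize=(9, 5))
for abbrev, stage, response, role in stories:
    ax.scatter(stage, response, color=role_colors[role])
    ax.annotate(abbrev, (stage, response), xytext=(5, 5), textcoords="offset points")

ax.set_xticks(range(len(stages)))
ax.set_xticklabels(stages, rotation=30, ha="right")
ax.set_yticks(range(len(responses)))
ax.set_yticklabels(responses)
ax.set_xlabel("Stage of technological development")
ax.set_ylabel("Character's response to the technology")
plt.tight_layout()
plt.show()
```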

Details of the map aren’t important, but you can tell a couple of things at a glance. First, I’m nowhere close to covering the whole map. I’ve concentrated on the Operating Prototype and Widespread Adoption stages more than the others.

Second, innovators view technologies positively and critics negatively (duh), while users tend to view technology negatively in the early stages and more positively in the later ones.

As far as map coverage goes, I wonder if the Operating Prototype and Widespread Adoption stages provide more opportunity for dramatic stories than the other stages.

Has anybody studied technology in fiction using a similar method? I can imagine a map with hundreds of colored points on it, representing an analysis of hundreds of science fiction stories. It would be fun to see how my stories stack up against those of other authors.

In the meantime, I’ll continue to write. As more of my stories get published, perhaps you’ll see future versions of this map, updated by—

Poseidon’s Scribe

Talkin’ ‘Bout My Generation Project

Humanity just doesn’t go in for long-term projects anymore. The fire at Notre-Dame de Paris cathedral this past Monday got me thinking about projects that extend beyond a single human lifetime.

The French are determined to repair their beloved medieval church. Estimates of the duration of repairs range from five to twenty years or more. Those timeframes would have astounded the laborers who built it. They needed 182 years to finish the cathedral.

That sort of project duration was typical for cathedrals of the period. It seems we’re no longer accustomed to ‘generation projects.’ We’re used to completing large structures (buildings, dams, tunnels, bridges, etc.) in spans of less than thirty years.

Imagine what it took to build something that required centuries. The original planners, designers, and workers knew they’d never see the completed work. The designers passed on their plans to others, and hoped the enthusiasm for the project would carry through. Laborers in the middle years worked on a project they didn’t originate and knew they’d never finish. Only the final generation of workers lived to enjoy the project’s culmination.

As an engineer with some program management experience, I marvel at such long-term projects. As a fiction writer, I try to understand the motivation behind them. How did builders sustain the guiding vision generation after generation? Let’s explore some historical generation projects, proceeding from most recent to oldest.

Temple Expiatori de la Sagrada Família. When finished, this will be a Roman Catholic church in Barcelona, Spain. Begun in 1882, the project encountered difficulties, including war and fire, that delayed it, though it’s due for completion in 2026, fully 144 years after its start.

Saint Basil’s Cathedral. Begun in 1555 in Moscow, this church took about 123 years to build, reaching completion in 1678.

St. Peter’s Basilica. This Italian Renaissance church stands in Vatican City. Construction began in 1506 and ended in 1626, 120 years later. Construction delays included difficulties with its immense dome and a succession of architects redesigning it, among them Michelangelo and Raphael.

Leaning Tower of Pisa. This cathedral bell tower in Pisa, Italy was doomed from the start of its construction in 1173, as it stood on unstable subsoil and started to lean. The difficulty of compensating for that lean was only one of the factors delaying its construction. War with other Italian city-states was another. Despite these setbacks, builders completed the project after 199 years, in 1372.

Notre-Dame de Paris. The fire on April 15 reminded us that all of humanity’s creations are subject to damage, and fire is perhaps the biggest threat to wooden structures. Construction of this medieval Catholic cathedral began in 1163, and was mostly done by 1260, but modifications continued until 1345, a total of 182 years.

Angkor Wat. According to one source, the building of Angkor Wat (in what is now Cambodia) began in 802 in the Khmer Empire and was completed in 1220, taking 418 years. It started as a Hindu temple and later became a Buddhist one.

Temple of Kukulcan. Also called El Castillo, this Mayan step pyramid, built as a temple to the god Kukulcan, stands in the ancient city of Chichen Itza in what is now Mexico. Construction started in the year 600 and continued in phases to 1000, a duration of 400 years.

Great Wall of China. On my list of generation projects, the Great Wall boasts the longest duration. One site dates its start as 400 B.C. and its completion as 1600 A.D., or two millennia. Emperors of various dynasties, including the Qin, Han, Qi, Sui, and Ming, ordered its construction; the guiding vision seems to have been protection against raiders from the northern steppes.

Stonehenge. Now we come to the oldest generation project on my list, a Neolithic structure in England begun around 3100 B.C. and completed around 1600 B.C. The builders left no records, and the structure’s purpose is unknown. Theories include a burial site, an astronomical observatory, ancestral worship, a symbol of peace and unity, and a place of healing.

From the above list, we can see that, with the exception of the Great Wall and possibly Stonehenge, religion provides a strong motivation for embarking on and sustaining a long-term project. Also, it’s generally true that these projects took a lot longer than originally planned, encountering various disruptions and delays along the way.

If we graph the timeline of these generation projects, it’s clear the timeframes are shortening, likely a result of advancing construction techniques and laborsaving machinery.
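
Here’s a small sketch of that graph, using the approximate start dates and durations listed above (negative numbers stand for years B.C.):

```python
# Plot each generation project's start year against its construction duration,
# using the approximate figures cited above (negative years mean B.C.).
import matplotlib.pyplot as plt

projects = [
    ("Stonehenge", -3100, 1500),
    ("Great Wall of China", -400, 2000),
    ("Temple of Kukulcan", 600, 400),
    ("Angkor Wat", 802, 418),
    ("Notre-Dame de Paris", 1163, 182),
    ("Leaning Tower of Pisa", 1173, 199),
    ("St. Peter's Basilica", 1506, 120),
    ("Saint Basil's Cathedral", 1555, 123),
    ("Sagrada Família", 1882, 144),
]

fig, ax = plt.subplots()
for name, start, years in projects:
    ax.scatter(start, years, color="tab:blue")
    ax.annotate(name, (start, years), xytext=(4, 4),
                textcoords="offset points", fontsize=8)

ax.set_xlabel("Approximate start year (negative = B.C.)")
ax.set_ylabel("Construction duration (years)")
ax.set_title("Generation projects: later starts, shorter builds")
plt.show()
```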

Given the faster pace of modern construction, have we lost the ability to plan and accomplish long-term projects? Could we sustain the enthusiasm of a building project over centuries, as our ancestors did?

If we desire to build megastructures on a planetary or stellar scale someday, things such as terraformed planets, Shellworlds, Niven Rings, Dyson Spheres, and others, it’s likely we’ll have to reacquire the multi-generational mindset of those who came before us.

To sustain a project of that type we’d need a motivating spirit, a shared vision as powerful as the ones (like religion or protection) that inspired our predecessors.

Alternatively, we could work on extending the human lifespan. A career length of two thousand years, sufficient to oversee the entirety of the Great Wall, seems like a fine notion to—

Poseidon’s Scribe

Technoethics and the Curious Ape

In the movie Jurassic Park, the character Ian Malcolm says, “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” Today, I’m focusing on another technology topic, namely ethics in technology, or Technoethics.

Wikipedia article “Ape”

Our species is innately curious and inventive. We possess large brains and opposable thumbs, but lack claws, shells, great speed, camouflaged skin and other attributes employed by animals to attack prey or to avoid becoming prey. These circumstances make us natural toolmakers.

From the beginning, we found we could use our tools for good or evil. The same stick, spear, bow and arrow, or rifle we used to kill a rabbit for dinner could also kill a fellow human. The different outcome is not inherent in the tool, but in the heart of the person employing it.

For each new technology in our history, there was at least one inventor. This person took an idea, created a design, and often used available materials to assemble the new item. Were these inventors responsible for, in Malcolm’s words, stopping to think if they should?

With some technologies, like the plow, the printing press, the light bulb, and the automobile, it’s certain their creators intended only positive, beneficial outcomes. The inventor of the automobile could not have foreseen people using cars as weapons, or that one day there’d be so many cars they’d pollute the atmosphere.

With other technologies such as the spear, the warship, the cannon, and the nuclear bomb, the inventor’s intent was to kill other people. Why? The usual rationale is twofold: (1) My side needs this technology so our wartime enemy does not kill us, or (2) If I do not invent this technology first, my enemy will, and will use it against my side. Given such reasoning, an inventor of a weapon can claim it would be immoral not to develop the technology.

I’m sure there are unsung examples of would-be inventors refusing, on ethical grounds, to develop a new technology because they feared the consequences. The only example I can think of, though, is Leonardo da Vinci. Although he had no qualms about designing giant crossbows and battle tanks, he drew the line at submarines. Though at first excited about giving a submarine design to the Venetians for use against the Turks, da Vinci reconsidered and destroyed his own plans, after imagining how horrible war could become.

That example aside, the history of humanity gives me no reason to suspect future inventors will hesitate to develop even the most potent and powerful technologies. It’s our curious ape nature; if we can, we will. Only afterward will we ask if we should have.

As a writer of technological fiction, I’ve explored technoethics in many of my stories:

  • In “The Sea-Wagon of Yantai,” a Chinese submarine inventor intends his craft as a tool of exploration, but an army officer envisions military uses.
  • In “The Steam Elephant,” a British inventor sees his creation as a mobile home for safari hunters, but then imagines the British Army employing it on the battlefield. Only the narrator character fears what war will become when both sides have such weapons.
  • In “Leonardo’s Lion,” da Vinci actually builds his inventions, but hides them away and gives clues to the King of France about where to find them. The King never sees the clues, but decades later a ten-year-old boy does, and must decide whether the world is ready for these amazing devices.
  • In “The Six Hundred Dollar Man,” a doctor imagines how steam-powered prosthetic limbs would have saved crippled Civil War soldiers, but fails to foresee how super-strength and super-speed could turn a good person bad.
  • In “Ripper’s Ring,” a troubled Londoner in 1888 comes across the Ring of Gyges that Plato wrote about, an invisibility ring. Possession of that ring changes him into history’s most famous mass murderer.
  • In “After the Martians,” the survivors of an alien attack in 1901 take the Martian technology (tripods, heat rays, flying machines) and fight World War I.

As we smart apes start playing with bigger and more deadly sticks, maybe one day we will stop and think if we should before we think about whether we can. Hoping that day comes soon, I’m—

Poseidon’s Scribe