My Top Ten Favorite Movies that Take Place on an Airplane

I enjoy movies that take place in a confined area such as a boat, a train, an Arctic outpost, or an airplane. The space constraint forces writers, directors, and actors to be clever and creative.

Although many movies have a few scenes that take place on an airplane, there aren’t all that many movies where the majority of the story takes place inside a plane. Here are my 10 favorites.


1. Air Force One (1997) – Gary Oldman plays a Russian terrorist who hijacks Air Force One with the U.S. President (Harrison Ford) on board. An excellent combination of plot twists, action, and acting. I’ll bet you’ve seen at least parts of this movie several times on TV. The catchphrase scene: “Get off my plane!”


2. Red Eye (2005) – A young hotel manager (Rachel McAdams) traveling from Dallas to Miami meets a man (Cillian Murphy) by chance in an airport. After she boards the plane, she finds he is seated right next to her. What a coincidence! Or is it? There’s an assassination plot afoot, and a lot of tension as McAdams’ cell phone battery is on the verge of dying.


3. Non-Stop (2014) – When will criminals learn not to mess with Liam Neeson? Neeson plays a sky marshal on a flight from New York to London. In mid-flight he receives a text message on his special secure phone that says a passenger will die every 20 minutes until a ransom of $150 million is paid. People start dying, but everyone comes to believe Neeson himself is part of the scheme. Who among the many passengers and crew is behind the plot?


4. Executive Decision (1996) – This movie came out a year before “Air Force One” and I sometimes confuse the plots of the two somewhat similar films. In this movie, terrorists hijack a flight from Greece to Washington, DC to force the release of a jailed terrorist leader. A DARPA engineer (Kurt Russell) and a team of commandos sneak on board the flight in mid-air using a modified F-117 stealth jet. Halle Berry plays a flight attendant.


5. Flightplan (2005) – Jodie Foster plays a woman who boards a jumbo jet in Berlin with her young daughter, to take the body of her husband (who died mysteriously) to the United States. Shortly after boarding, the daughter disappears and everyone on the plane insists the daughter never existed. Is Foster going insane? Or is there a sinister conspiracy of some kind?


6. Airport (1970) – A suicidal man boards a Boeing 707 jet flying out of Chicago. The movie, set in the early days of commercial jet travel, featured many well-known actors, including Burt Lancaster, Dean Martin (as a pilot), Jacqueline Bisset, George Kennedy, Van Heflin, and Lloyd Nolan. The film was a huge success and spawned three sequels, as well as many of the 1970s disaster movies such as “The Poseidon Adventure”, “The Towering Inferno”, and “Earthquake”.


7. The High and the Mighty (1954) – This movie isn’t as well known as the others on my list, but it was perhaps the first on-a-plane movie with a big budget and a major star. John Wayne is the pilot of a DC-4 passenger plane flying from Honolulu to San Francisco. An engine failure and fuel loss occur at the worst possible point in the flight. Will the plane make it safely to San Francisco? I suspect this film, based on the 1953 novel of the same name, was the direct inspiration for “Airport” (1970).


8. Memphis Belle (1990) – The movie takes place on board a World War II B-17 bomber during a bombing raid over Nazi Germany. A very exciting film. My favorite performances are by Billy Zane (the bombardier) and Courtney Gains (the right waist gunner). I’m always stunned when I think of the unbelievable bravery and courage of the men who fought in WWII.


9. Airport 1975 (1974) – This is the first sequel to “Airport” (1970) and, unlike many movie sequels, it is very good. A Boeing 747 jumbo jet is flying from Washington DC to Los Angeles when a small private plane crashes into the 747’s cockpit, killing the first officer (Roy Thinnes) and the flight engineer (Erik Estrada), and leaving the pilot (Efrem Zimbalist, Jr.) blind and unconscious. Charlton Heston manages to board the jet in mid-flight by being lowered from a helicopter.


10. The Sky Dragon (1949) – On a flight from Honolulu to San Francisco, all the passengers, including Charlie Chan (played by Roland Winters) and his son Lee (Keye Luke), are mysteriously rendered unconscious. When everyone wakes up, they find that $250,000 is missing from a courier. Will the two Chans solve the mystery? This was the last film in the Charlie Chan series.



Honorable Mention

The Horror at 37,000 Feet (1973) – This is a made-for-TV film and it really made an impression on me the one time I saw it. Demonic forces are on board a Boeing 747 flight from London to New York. The stars included Chuck Connors as the pilot, Buddy Ebsen, France Nuyen, and William Shatner as an ex-priest who tries to defeat the evil using religion. Bad idea.


Nightmare at 20,000 Feet (1963) – This is one of the best-known (and, in my opinion, best) stories of the old Twilight Zone TV series. William Shatner plays a character just released from a sanitarium who is on a passenger flight on a dark and stormy night. He thinks he sees a creature on the plane’s wing . . .


Snakes on a Plane (2006) – Do you really need an explanation?


Airplane! (1980) – Many movie critics and movie fans consider this one of the best film comedies of all time. It certainly has some hilarious moments, and it has influenced the film comedy genre ever since.


Posted in Top Ten

Image Classification Using Keras

I wrote an article titled “Image Classification Using Keras” in the December 2018 issue of Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2018/12/01/image-classification-keras.aspx.

Keras is a neural network library. It is actually a layer of abstraction on top of the TensorFlow library. The idea is that TensorFlow operates at a low level and is quite difficult to use directly, so Keras gives developers a much easier-to-use interface for creating deep neural networks.

In my mind, there are five basic types of problems that are well-suited for neural networks: multiclass classification, binary classification, regression, CNN image classification, and LSTM sentiment analysis.

In my article, I show how to use Keras to create a prediction model for the well-known MNIST image data set. Each image has size 28 pixels by 28 pixels. Each pixel is a grayscale value between 0 and 255. Each image is a hand-drawn digit, ‘0’ through ‘9’. The goal is to read in a set of 28×28 = 784 pixels and predict the digit that’s represented.

My demo program uses a convolutional neural network (CNN) which is a clever architecture that looks at small areas of an image, rather than at the entire image as a whole. This gives a better prediction model, scales well to larger images with possibly millions of pixels, and is resistant to small changes in the positioning of the image (“shift invariance”).

The MNIST dataset has a total of 70,000 images and is divided into a training set (60,000 images, roughly 6,000 of each digit) and a test set (10,000 images).
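
For readers who want to experiment, here’s a minimal sketch of this kind of Keras CNN for MNIST. The layer sizes and training settings are my illustrative choices, not necessarily the ones used in the article’s demo program.

from tensorflow import keras

# Load MNIST: 60,000 training images and 10,000 test images, 28x28 grayscale.
(train_x, train_y), (test_x, test_y) = keras.datasets.mnist.load_data()
train_x = train_x.reshape(-1, 28, 28, 1) / 255.0  # scale pixels to [0, 1]
test_x = test_x.reshape(-1, 28, 28, 1) / 255.0

# A small CNN: the convolution layer scans 3x3 areas of each image.
model = keras.Sequential([
    keras.layers.Conv2D(32, kernel_size=3, activation='relu',
                        input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Flatten(),
    keras.layers.Dense(100, activation='relu'),
    keras.layers.Dense(10, activation='softmax')])  # one output per digit

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_x, train_y, batch_size=100, epochs=3)
print(model.evaluate(test_x, test_y, verbose=0))  # [loss, accuracy] on test data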

An interesting aspect of image classification that I didn’t have time to discuss in my article is that even though CNNs were designed specifically for image classification, they are being used with great success in other problem scenarios.



Keras means “horn” in Greek. Left: I love the Horatio Hornblower historical fiction novels by C.S. Forester. Left Center: The 1951 movie “Captain Horatio Hornblower”, starring Gregory Peck, is very good. Right Center: The British series of TV movies from 1998 to 2003, starring Ioan Gruffudd, is also excellent. Right: Author Alexander Kent wrote a similar series of excellent novels in the 1970s. The themes of courage in the face of adversity, loyalty and friendship, patriotism, and resilience are not corny in my mind – they’re important principles to try to honor (as best as possible).

Posted in Keras

Introduction to PyTorch on Windows

I wrote an article titled “Introduction to PyTorch on Windows” in the January 2019 issue of Microsoft MSDN Magazine. See https://msdn.microsoft.com/en-us/magazine/mt848704.

Among my colleagues, the most commonly used neural network libraries are TensorFlow, Keras, CNTK, and, increasingly, PyTorch. I like all these libraries, but I find myself using PyTorch more and more often. Like all of these libraries, PyTorch has a non-trivial learning curve. But once you get over the initial hurdles, PyTorch has a very nice feel to it (and don’t ask me to explain what I mean by that, because I can’t).

Here’s a screenshot of the demo program for the article:

The demo program reads the well-known Iris dataset into memory. The goal is to predict the species of an Iris flower (setosa, versicolor or virginica) from four predictor values: sepal length, sepal width, petal length and petal width. A sepal is a leaf-like structure.

The complete Iris dataset has 150 items. The demo program uses 120 items for training and 30 items for testing. The demo first creates a neural network using PyTorch, then trains the network using 600 iterations. After training, the model is evaluated using the test data. The trained model has an accuracy of 90.00 percent, which means the model correctly predicts the species of 27 of the 30 test items.

The demo concludes by predicting the species for a new, previously unseen Iris flower that has sepal and petal values (6.1, 3.1, 5.1, 1.1). The prediction probabilities are (0.0454, 0.6798, 0.2748), which maps to a prediction of versicolor.
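
To make that concrete, here’s a minimal sketch of a PyTorch network for the Iris problem. The hidden layer size, learning rate, and use of the full 150-item dataset (rather than the 120/30 train-test split) are my simplifications for illustration, not the article’s actual demo code.

import torch
import torch.nn as nn
from sklearn.datasets import load_iris  # a convenient source for the 150 items

data = load_iris()
x = torch.tensor(data.data, dtype=torch.float32)  # [150, 4] predictor values
y = torch.tensor(data.target, dtype=torch.long)   # [150] species labels 0, 1, 2

class IrisNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hid = nn.Linear(4, 7)  # 4 predictors -> 7 hidden nodes (illustrative)
        self.out = nn.Linear(7, 3)  # 3 species

    def forward(self, t):
        return self.out(torch.tanh(self.hid(t)))  # raw logits

net = IrisNet()
loss_fn = nn.CrossEntropyLoss()  # applies softmax internally
opt = torch.optim.SGD(net.parameters(), lr=0.05)

for i in range(600):  # 600 training iterations, as in the demo
    opt.zero_grad()
    loss_fn(net(x), y).backward()
    opt.step()

# Predict the species of a new flower with values (6.1, 3.1, 5.1, 1.1).
unk = torch.tensor([[6.1, 3.1, 5.1, 1.1]])
print(torch.softmax(net(unk), dim=1))  # three pseudo-probabilities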

In the last paragraph of my article, I give an opinion:

A common question is, “Which neural network library is best?” In a perfect world you could dedicate the time to learn all the major libraries. But because these libraries are quite complicated, realistically most of my colleagues have one primary library. In my opinion, from a technical point of view, the three best libraries are CNTK, Keras/TensorFlow, and PyTorch. But they’re all excellent, and picking one library over another depends mostly on your programming style and on which one is most used by your colleagues or company.



PyTorch – torch – tiki torch – hula – hula girl lamp. Hula girl lamps were first popularized in the late 1920s. They were made by the Dodge Company (not the auto maker) — the company that made the first Oscar statues for the motion picture Academy Awards, and still makes them today. Some of the old lamps are very valuable.

Posted in PyTorch

Anomaly Detection Using a Deep Neural Autoencoder

Anomaly detection is the process of finding unusual data items. One standard approach is to cluster the data and then look at clusters with very few items, or at items that are far away from their cluster mean/average. Unfortunately, in most cases clustering works only with strictly numeric items (there are a few exceptions).

What do you do if your data is non-numeric or mixed numeric and non-numeric? I decided to explore an idea I’d seen in a couple of recent research journals: Create a deep neural autoencoder and then look at those items that the encoder has the most trouble with.

I put together a quick demo, and the idea seems to work. I used the UCI Digits dataset. There are 1797 items. Each item has 64 predictor values which represent an 8×8 handwritten digit. I used PyTorch to create a 64-32-16-4-16-32-64 deep neural autoencoder. An autoencoder is a neural network that predicts its own input values.

One of the key ideas here is that, unlike standard clustering, a neural autoencoder can deal with both numeric input and non-numeric input (encoded using 1-of-(N-1) or one-hot encoding).

After training the autoencoder, I walked through the 1797 items and found the one that gave the autoencoder the most trouble, meaning the item that gave the highest error between the 64 input pixel values and the 64 predicted pixel values.
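
Here’s a minimal sketch of the technique. The architecture matches the 64-32-16-4-16-32-64 shape described above; the training settings are illustrative, and I use scikit-learn’s copy of the UCI Digits data for convenience.

import torch
import torch.nn as nn
from sklearn.datasets import load_digits

digits = load_digits()  # the 1797 UCI Digits items, 64 pixel values each
data_x = torch.tensor(digits.data / 16.0, dtype=torch.float32)  # scale to [0, 1]

class AutoEncoder(nn.Module):  # 64-32-16-4-16-32-64
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(64, 32), nn.Tanh(),
                                 nn.Linear(32, 16), nn.Tanh(),
                                 nn.Linear(16, 4), nn.Tanh())
        self.dec = nn.Sequential(nn.Linear(4, 16), nn.Tanh(),
                                 nn.Linear(16, 32), nn.Tanh(),
                                 nn.Linear(32, 64), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

ae = AutoEncoder()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(ae.parameters(), lr=0.01)

for epoch in range(1000):  # illustrative training schedule
    opt.zero_grad()
    loss_fn(ae(data_x), data_x).backward()  # predict the input itself
    opt.step()

# Scan all items for the one with the worst reconstruction error.
with torch.no_grad():
    errs = ((ae(data_x) - data_x) ** 2).mean(dim=1)
    print(torch.argmax(errs).item())  # index of the most anomalous item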

It turned out that item [1113] had the highest error. It was a handwritten ‘7’. I displayed item [1113] and sure enough, it looked like an anomaly (meaning it didn’t look very much like a ‘7’).

A different approach (which I didn’t have time to explore) is to use an autoencoder architecture that has 64 nodes in the central hidden layer. After training, when the network is fed input values, the central layer’s 64-value vector is a purely numeric representation of the input. Because the representation is numeric, k-means clustering can be applied. Then, after clustering, it’s possible to find clusters with very few items, or items that are far away from their cluster mean. A lot of ideas there. A sketch of this alternative follows.
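
Continuing from the code above, here’s how you might pull out the central-layer values and cluster them. It’s shown with the 4-node central layer from the earlier sketch, but the idea is the same with a 64-node central layer; the cluster count is an illustrative choice.

import numpy as np
from sklearn.cluster import KMeans

with torch.no_grad():
    latents = ae.enc(data_x).numpy()  # purely numeric representation of each item

km = KMeans(n_clusters=10, n_init=10).fit(latents)  # 10 clusters (illustrative)

# Items far from their cluster center are anomaly candidates.
dists = np.linalg.norm(latents - km.cluster_centers_[km.labels_], axis=1)
print(np.argsort(dists)[-5:])  # the five items farthest from their centers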

Anomaly detection is a very difficult problem, but my experiment suggests that a deep neural autoencoder has good potential for tackling anomaly detection.



I don’t smoke and don’t think smoking is healthy, but I do find some ornamental cigarette case art from the 1920s interesting and beautiful — a personal anomaly I suppose.

Posted in Machine Learning, PyTorch

NFL 2018 Week 20 (Conference Championships) Predictions – Zoltar Barely Likes the Saints and the Patriots

Zoltar is my NFL prediction computer program. It uses a deep neural network and reinforcement learning. Here are Zoltar’s predictions for week #20 of the 2018 NFL season (third weekend of playoffs):

Zoltar:      saints  by    4  dog =        rams    Vegas:      saints  by  3.5
Zoltar:      chiefs  by    2  dog =    patriots    Vegas:      chiefs  by    3

Zoltar theoretically suggests betting when the Vegas line is more than 3.0 points different from Zoltar’s prediction. For week #20, Zoltar has no solid hypothetical suggestions.

But if Zoltar were forced to give advice, he’d say bet on the Vegas favorite Saints over the Rams, and bet on the Vegas underdog Patriots against the Chiefs. Zoltar thinks the Saints will win by 4 points and cover the 3.5 point spread. Zoltar thinks the Chiefs will win by 2 points and therefore not cover the 3.0 point spread.

Theoretically, if you must bet $110 to win $100 (typical in Vegas) then you’ll make money if you predict at 53% accuracy or better. But realistically, you need to predict at 60% accuracy or better.
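
For the curious, the 53% figure is just the break-even point. If p is your win rate, each win nets $100 and each loss costs $110, so you break even when 100p = 110(1 - p), which gives p = 110/210 = 0.5238, or roughly 52.4% (rounded up to 53%).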

Just for fun, I track how well Zoltar does when trying to predict just which team will win a game (not by how many points). This isn’t useful except for parlay betting.

Zoltar sometimes predicts a 0-point margin of victory. There are no such games in week #20. When there are such games, in the first four weeks of the season Zoltar picks the home team to win. After week four, Zoltar uses historical data for the current season (which usually, but not always, results in a prediction that the home team will win).

==

Zoltar did OK last week. Against the Vegas point spread, which is what Zoltar is designed to predict, Zoltar went 1-0 by correctly liking (barely) the Vegas underdog Eagles against the Saints. The Saints won 20-14 but didn’t cover the 9.0-point Vegas spread.

For the regular season, against the Vegas spread, Zoltar went 49-28 which is about 63% accuracy. Including the first two playoff weekends, Zoltar went 51-28 (about 64% accuracy).

Just predicting winners, Zoltar was a perfect 4-0. Vegas was also 4-0 as all favorites won quite easily. For the regular season, just predicting which team will win, Zoltar was 173-81 (about 68% accuracy) and Vegas was 167-85 (about 66% accuracy).



My system is named after the Zoltar fortune teller machine you can find in arcades. A couple of seasons ago, I created a little Zoltar machine (about 8 inches tall) that had a speech interface. I also experimented with predicting college basketball scores with “Zoltara”, but I didn’t really have enough time to explore deeply.

Posted in Machine Learning, Zoltar

I Give a Talk About Neural Regression Using PyTorch

I work at a large tech company. One of the things I do at work is present short (about an hour) talks on machine learning and artificial intelligence topics. A few days ago I gave a talk on performing regression using a neural network, with the PyTorch library.

A regression problem is one where the goal is to predict a numeric value. I used one of the most common datasets, the Boston Housing dataset. There are 506 data items. Each item represents a town near Boston. The goal is to predict the median house price in a town, using 13 predictor variables.

The first three predictors are: [0] = per capita crime rate, [1] = proportion of land zoned for large residential lots, and [2] = proportion of non-retail business acres.

Predictor [3] is a Boolean indicating whether the town borders the Charles River (0 = no, 1 = yes). Briefly, the remaining predictors are: [4] = air pollution metric, [5] = average number of rooms per house, [6] = proportion of old houses, [7] = weighted distance to Boston, [8] = index of accessibility to highways, [9] = tax rate, [10] = pupil-teacher ratio, [11] = measure of proportion of Black residents, and [12] = percentage of lower socio-economic status residents.

I briefly talked about two approaches to normalizing the numeric predictor values. The simplest approach is to drop the data into Excel, normalize, then save the normalized data as a text file. The second approach is to programmatically normalize the data. The simple Excel approach has the minor downside that when you want to make a prediction after training, you have to normalize predictor values offline.
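
As a sketch of the programmatic approach, min-max normalization might look like the code below. The choice of min-max rather than z-score, and of normalizing every column except the Boolean Charles River predictor, are illustrative assumptions, not necessarily what my demo used. The file name “boston.txt” is a placeholder.

import numpy as np

def min_max_normalize(data, cols):
    # Scale each specified column to [0.0, 1.0]. Return each column's min and
    # max too, so new items can be normalized the same way at prediction time.
    result = data.copy().astype(np.float32)
    mins, maxs = {}, {}
    for c in cols:
        mins[c], maxs[c] = result[:, c].min(), result[:, c].max()
        result[:, c] = (result[:, c] - mins[c]) / (maxs[c] - mins[c])
    return result, mins, maxs

# Hypothetical usage:
# data = np.loadtxt("boston.txt", delimiter=",")
# norm, mins, maxs = min_max_normalize(data, [c for c in range(13) if c != 3])

Returning the per-column min and max values addresses the downside mentioned above: you can normalize a new item’s predictor values the same way when making a prediction.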

Even though regression is conceptually simple, I didn’t have enough time to discuss the entire program. Machine learning is incredibly fascinating, but there are lots and lots and lots of details.



A couple of humorous observations related to details by one of my favorite cartoonists, Jim Unger (1937-2012).

Posted in Machine Learning, PyTorch

The 2019 Visual Studio Live! Conference is Coming to Las Vegas

Visual Studio Live! is one of my top three favorite conferences for software developers who use Microsoft technologies. You should consider the possibility of attending. The event runs March 3-8, 2019 and will be at Bally’s Hotel in Las Vegas. See https://vslive.com/events/las-vegas-2019/home.aspx.


From the main VS Live! Web site.

OK, so why am I promoting this event? Do I get a kickback of some sort? No. I recommend VS Live! because I think it’s a good event. OK, but what do I mean by that?


Here’s a picture of the conference lobby area from last year’s event in Vegas. The VS Live! events tend to be a bit smaller than other conferences, which allows more interaction among attendees and speakers. My Windows Phone (I think I was one of about 17 people who still had one in 2018) had a cracked camera lens. I have a new phone now. I miss my old phone!

First of all, conferences like this are expensive. But VS Live! delivers good value for the money in my opinion. It’s usually not feasible to pay for such an event out of pocket, but many companies will fund your attendance as part of training. The conference Web site has a Sell Your Boss page at https://vslive.com/events/las-vegas-2019/information/sell-your-boss.aspx.

The second reason I recommend VS Live! is somewhat subjective. The people who run the conference, Brent and Danielle, are good people. By that I mean I can vouch for their character and integrity. And one of my few guiding principles in life is to associate with good people and good things will happen. I think this will be the 26th year for VS Live! which is amazing and a testament to the event’s quality.

The third reason I recommend VS Live! is a bit vague. Whenever I go to VS Live! I gain all kinds of practical technical information that I can put to use in my job. But I also pick up interesting and useful semi-technical information, such as industry trends, in impromptu conversations with other attendees. And when I return to work, I do so with renewed energy and enthusiasm, and I know I’m more productive.


Last year I gave a talk about Azure Machine Learning Studio. Same cracked camera lens.

Finally, the March 2019 conference is in Las Vegas! A truly fascinating city if you like mathematics, psychology, marketing, or people-watching.


In addition to regular technical talks, there are other events such as panel discussions and lunch birds-of-a-feather gatherings. Someone else took the photo for me.

The VS Live! people sent me a message that said if you use the special code SPKLV24, you can get a $400 discount on the 5-day or 6-day package. That sounds like a good deal.

In the end, only you can decide if VS Live! makes sense for you. So, check out the Web site and look the agenda over. By the way, there are multiple VS Live! events in different cities, including Dallas, New Orleans, Boston, Redmond, San Diego, Chicago, and Orlando, so if you can’t make it to Vegas, maybe you can go to a different event. See https://vslive.com/home.aspx.



Yesterday, my friend Kent told me about a “secret” bar he visited recently in Las Vegas. There’s a regular bar called the Commonwealth in the Downtown Fremont Street area. But there’s also a small hidden speakeasy style bar named The Laundry Room inside the Commonwealth. Left and center: the Commonwealth. Right: The Laundry Room.

Posted in Conferences