My Top Ten Scary Science Fiction Movies

I’m a big fan of science fiction movies. Here are my top ten favorite scary sci-fi movies.


1. Starship Troopers (1997) – The alien bugs in this movie are really scary. The part where the bugs attack “Outpost Whiskey” was particularly tense and had me on edge. I think the scariness comes, in part, from the sheer numbers of aliens attacking, and knowing the bugs have no feelings and show no mercy.


2. Invaders from Mars (1953) – Late at night, during a thunderstorm, a boy wakes up, looks out his window, and thinks he sees a flying saucer land in the sand pit near his house. Was he dreaming? I saw this movie when I was young, and I literally had nightmares about it for the next 25 years. Don’t go near the sand pit!


3. Alien (1979) – I saw this movie, somewhat by accident, on the first day it was released. In 1979 there was no Internet, no social media, no cable TV. This movie was released in total secrecy. My friends and I were driving around Orange, California, and we saw the theater sign for “Alien”. What the heck, we thought, so we went in. We were all scared silly. I remember that when the movie ended, nobody in the theater moved — we were all stunned. Great movie!


4. Deep Blue Sea (1999) – A genetically engineered, super-intelligent shark. What could go wrong? For the first 20 minutes of the movie, I was thinking, ho hum. But when the character played by Samuel L. Jackson met a quick and surprising end, I knew this movie was going to be special. I liked the scene where one of the survivors, who was about to be airlifted out on a stretcher before the helicopter crashed (I hadn’t paid attention to the fact that he’d been given an oxygen mask), makes a horrifying reappearance.


5. The Thing (1982) – Researchers isolated at the South Pole encounter an alien that can assimilate other creatures. If you’ve seen the film you probably remember the blood test scene. The plot is confusing in places and the ending is somewhat ambiguous, but the movie is very scary, with great special effects that hold up well over 30 years later.


6. Jeepers Creepers (2001) – When I first saw this movie, I realized right away I’d never seen anything like it. The pursuit of the brother, Darry (played by Justin Long), and the sister, Trish (played by Gina Philips), was incredibly tense. Many scenes in this movie had me rigid with tension.


7. Godzilla (1956) – The original (well, the original American adaptation) of Godzilla bears no resemblance to the later spoofs. It was played deadly seriously, and if you can suspend disbelief (as I can for sci-fi movies), Godzilla is a very scary movie. I had nightmares about the scene where the scientists and villagers hike up the grassy hill and Godzilla suddenly appears over the crest.


8. Predator (1987) – This movie is one of the best of the genre where a group of people (mercenaries led by Arnold Schwarzenegger in this case) are picked off one by one by something bad (the Predator). Who will be the next person to die?


9. Cube (1997) – A group of people wake up in a metal cube about 20 feet on a side. None has any memory of how they got there. Each wall, plus the floor and ceiling, has a small door that leads to . . . another cube room. Some of the rooms are safe to enter, others . . . not so safe. Particularly gruesome was the room with a grid of thin metal wires that came sweeping across like a giant cheese cutter. Ugh.


10. Pitch Black (2000) – I actually prefer the 2004 sequel, The Chronicles of Riddick, but Pitch Black is scarier. A group of spaceship crash survivors, including Vin Diesel, try to evade various nasty creatures on a scary planet during an eclipse.


Posted in Top Ten

Microservices – Move Along, Nothing New Here

Technology is subject to fads, fashions, and buzzword-of-the-day just like most other things. Recently, a colleague of mine asked me for my opinion of microservices. It took me a moment to diplomatically answer along the lines of, “A microservice is just a small Web service function — nothing more.”

I remember when Service Oriented Architecture first appeared in the early 2000s. It seemed exotic and mysterious. In SOA you create a Web service that can accept a query from an application, for example http://mywebservice.com/adder?x=3&y=5. The service does some processing (in this case, adding x = 3 and y = 5) and returns the sum, 8, to the application, probably in XML or JSON format.
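To make the idea concrete, here’s a minimal sketch of such an adder service written with nothing but Python’s standard library. The port number and the JSON response shape are my own choices, not part of any standard:

# adder_service.py -- a minimal sketch of the hypothetical adder
# service above; the x and y parameter names follow the example URL
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import json

class AdderHandler(BaseHTTPRequestHandler):
  def do_GET(self):
    # parse the query string, e.g. /adder?x=3&y=5
    query = parse_qs(urlparse(self.path).query)
    x = int(query["x"][0])
    y = int(query["y"][0])
    body = json.dumps({"sum": x + y}).encode("utf-8")
    self.send_response(200)
    self.send_header("Content-Type", "application/json")
    self.end_headers()
    self.wfile.write(body)

if __name__ == "__main__":
  HTTPServer(("localhost", 8080), AdderHandler).serve_forever()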

One of the original design goals of SOA was to create a technology where all kinds of different systems could talk to each other. For example, if a large company had Linux systems and Windows systems and mainframe systems, you could put a Web service on top of each system.

The universal technology-glue scenario is pretty powerful. Additionally, Web services are useful to fetch proprietary data, or to perform processing on a very powerful machine.

I took a look at creating a microservice on top of Microsoft’s Azure platform. It was pretty slick — significantly faster and easier than spinning up a Web service from scratch (which I’ve done many times), but of course there’s the added expense of paying for Azure.

OK, so what does all this have to do with the latest and greatest microservices? Well, someone thought that it’d be cool to slap a new name around SOA. If you search the Internet for “microservices vs. SOA” you’ll find all kinds of nonsense where self-proclaimed experts try to create ideas where none exist.

The moral: a microservice is just a small SOA function. I’m waiting for someone to come up with the notion of a “miniservice” and a “macroservice” and a “metaservice” and who knows what else, and sell training classes on how to understand them. (I hope you sense the sarcasm here).

Posted in Miscellaneous

Machine Learning for Sports Prediction

Ever since my college days, I’ve been interested in using machine learning for sports prediction. However, among my research and engineering colleagues, I don’t run into very many people who share that interest.

About a year ago, I became acquainted with Bryan. Bryan contacted me because he’d read about my Zoltar prediction system for American NFL football. Bryan had created a prediction system for NCAA American college basketball.

Bryan and I decided to see if we could find other engineers who shared our interest in the intersection of machine learning and sports. So we organized an informal get-together. The idea was to have anyone who was interested show up to an empty room during the lunch break at the 2017 Microsoft “Machine Learning, Analytics, and Data Science” conference in June (an internal event for employees only).

So we sent out an email message that said basically, “Show up if you’re interested and we’ll sit around and chat.” Bryan and I had no idea of what would happen — we could well be sitting in an empty room, just the two of us, staring at each other.

Well, that’s not what happened. At 12:00 noon, the room quickly filled to capacity — probably about 220 people. Our original plan of just having everyone chat was out of the question, so I gave a quick 5-minute talk about the three research journals and four conferences related to sports technology. Bryan gave a 5-minute talk on his basketball prediction system.

So we found some people who are interested in sports and machine learning. But I just don’t know how to make good use of this interest. To do something during work time, we’d have to get sign-off from senior leadership, and presumably that would happen only if the activity benefited our company. To do something outside of work hours (as with my Zoltar and Bryan’s basketball system) is really tough because nobody has any spare time outside of work.

Some possibilities running through my mind are to schedule a talk every few weeks (from me, Bryan, and anyone else) to keep interest alive, or maybe try to organize a micro-conference (either standalone or attached to an existing conference), or try to engage with an external company (maybe a sports data collection company or a fantasy sports company), or, . . . well, I just don’t know. Dang! I know there’s something good that could happen, but I can’t quite figure out what it is.

Posted in Conferences, Machine Learning

Long Short-Term Memory Network using CNTK 2.0

Microsoft CNTK v2.0 is a code library for deep neural networks. The library is written in C++, but you use CNTK by writing a Python program that calls into the CNTK functions.

I took a look at the documentation example for a long short-term memory (LSTM) recurrent neural network. When I want to understand a library, my first step is to get a program to run. After that, I can start experimenting with the program. That’s the most effective approach for me anyway.

The CNTK example was written as an IPYNB (Jupyter/IPython notebook) file. I much prefer ordinary Python programs, so I set out to refactor the notebook code to plain Python.

It took me about an hour, but I eventually got the LSTM network demo running. The first part of the example generates training data using the sine function.
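Here’s a minimal sketch of that step (the number of points and the window size are my assumptions; the tutorial’s exact values may differ). Each sliding window of consecutive sine values becomes one training item, with the next value as the target:

import numpy as np

def generate_sine_data(num_points=10000, window_size=5):
  # sample the sine function at evenly spaced points
  x = np.sin(np.linspace(0, 100, num_points)).astype(np.float32)
  # each window of consecutive values predicts the value that follows it
  X = np.array([x[i:i+window_size] for i in range(num_points - window_size)])
  Y = np.array([x[i+window_size] for i in range(num_points - window_size)])
  return X, Y

X, Y = generate_sine_data()  # X shape: (9995, 5), Y shape: (9995,)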

The second part of the example creates, trains, and evaluates an LSTM prediction model.
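And here’s a rough sketch of what the model-creation part looks like, using the cntk.layers API (the hidden dimension of 25 and the dropout rate are my assumptions, not necessarily the tutorial’s values):

import cntk as C

def create_model(input_var):
  with C.layers.default_options(initial_state=0.1):
    m = C.layers.Recurrence(C.layers.LSTM(25))(input_var)  # run an LSTM over the sequence
    m = C.sequence.last(m)        # keep only the final hidden state
    m = C.layers.Dropout(0.2)(m)  # regularize during training
    return C.layers.Dense(1)(m)   # map to a single predicted value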

OK, so I got the example running, but at this point I have pretty much zero understanding of what’s going on. But I’ve taken the first step.

Posted in CNTK, Machine Learning

“R Programming Succinctly” – Free e-Book

A book I wrote, “R Programming Succinctly”, was published on June 5, 2017. See https://www.syncfusion.com/resources/techportal/details/ebooks/R-Programming_Succinctly. The R language is, arguably, the default open-source language for data science. My e-book describes how to write R language programs, as opposed to using R language functions in interactive mode.

The book is available as a free PDF, or in free Amazon Kindle format.

Free? Yes. The Syncfusion company has created a library of free e-books. There’s no catch except that you have to supply an e-mail address when you register. Syncfusion will send you some advertising every now and then, but they’re very restrained (a message only about once a month or so) and quite a few of their messages are useful and interesting.

Here’s a brief summary of the contents:

Chapter 1 explains how to install R and write and execute R programs. Chapter 2 explains the differences between R vectors, lists, arrays, matrices, and data frames — data structures that are quite different from those used in other languages.

Chapter 3 carefully explains how to use the wildly different R OOP models: list-based, S3, S4, and RC. Chapter 4 illustrates how to create custom R libraries. Chapter 5 shows how to code program-defined random number generators, neural networks, and bee colony optimization.

If you use R, you should find “R Programming Succinctly” quite useful.

Posted in Machine Learning, R Language

“AI for Everyone” at the 2017 Microsoft MLADS Conference

The 2017 Microsoft Machine Learning, Analytics, and Data Science (MLADS) conference was held from Wednesday, June 7 through Friday, June 9, on the Microsoft campus in Redmond, WA. MLADS is an internal event open only to employees.

There were about 100 sessions (talks, tutorials, workshops). Examples include “Scalable Machine Learning with Spark and R”, “Fine-Grained Sentiment Analysis for Better Customer Feedback Analysis”, and “Deep Learning for Machine Reading Comprehension”.

In January of 2017, Microsoft Executive Vice President Harry Shum announced the creation of the Microsoft AI School. It’s an internal organization that has a mandate to spread knowledge of machine learning and artificial intelligence throughout the company.

The AI School team delivered a session titled “AI for Everyone”. I acted as one of the hosts, and as one of the technical experts for the question and answer session following the talk (along with Ken and Roland). The two main speakers were Rick and Ani. Rick has a unique perspective while Ani is more traditional in his approach. Rick talked about the history of AI and gave a broad overview. Ani explained simple neural networks.

I estimate there were approximately 150 people attending the session. There is tremendous energy, enthusiasm, and thirst for knowledge about machine learning and AI. It’s a very exciting time to be a researcher or engineer who works with ML/AI.

Posted in Conferences, Machine Learning

Neural Network Batch Training – Accumulate the Gradients or the Deltas?

Even though neural networks have been studied for decades, there are many issues that aren’t well understood by the engineering community. For example, if you search the Internet for information about implementing batch training, you’ll find a lot of questions but few answers.

In pseudo-code, there are two alternatives. One approach accumulates all deltas (increments) for the weights, and then updates:

loop maxEpochs times
  for-each training item
    compute gradients for each weight
    use gradients to compute deltas for each weight
    accumulate the deltas
  end-for
  use accumulated deltas to update weights
end-loop

The second approach accumulates all gradients for the weights, and then updates:

loop maxEpochs times
  for-each training item
    compute gradients for each weight
    accumulate the gradients
  end-for
  use accumulated gradients to compute the deltas, and then update weights
end-loop

So, which approach is correct? Do you accumulate the deltas or the gradients? The answer is that both approaches are equivalent. The delta value for a weight connecting two nodes is the learning rate times the gradient associated with the weight: delta[i,j] = learnRate * gradient[i,j]. Because the learning rate is a constant, the sum of the deltas over a batch is just the learning rate times the sum of the gradients, so mathematically it doesn’t matter whether you accumulate the weight deltas or the weight gradients.

I wrote some code to demonstrate. My demo generates 1,000 synthetic items where each item has four input (feature) values and three output (class) values. I wrote two different methods that train an NN, one that accumulates weight deltas and one that accumulates weight gradients. The results were identical.
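Here’s a minimal sketch of the equivalence (this is not my actual demo; it tracks just a single weight with made-up gradient values):

import numpy as np

# made-up per-item gradients for a single weight over one batch
rng = np.random.default_rng(seed=0)
gradients = rng.normal(size=1000)
learn_rate = 0.05

# approach 1: compute a delta per item, accumulate the deltas
acc_delta = sum(learn_rate * g for g in gradients)

# approach 2: accumulate the gradients, then compute one delta
acc_grad_delta = learn_rate * sum(gradients)

print(acc_delta, acc_grad_delta)  # same value, up to float rounding
assert np.isclose(acc_delta, acc_grad_delta)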

This discussion is all about “full-batch” training, where all training items are processed before doing any weight updates. The issues are the same for mini-batch training, where you process a chunk of training items at a time. For “stochastic” (or “online”) training, where you update the weights after each training item is processed, the question of accumulating deltas or gradients doesn’t arise.

Posted in Machine Learning