Introducing GMinder for Google Calendar

GMinder is a reminder program that waits in your system tray and
alerts you when you have an upcoming Google Calendar event. GMinder
supports multiple calendars and allows you to configure how you want to be
alerted. Since it downloads your events, it works offline and lets you preview your agenda.

GMinder’s main reminder window

Preview your agenda

Check it out on the project page!

Sequentially Accessing a Rectangular Array

We know that .NET performs optimizations when accessing rectangular arrays, but for sequential access should the inner loop be on the first or second index? Is there even a difference?

The Code

using System;
class Program
{
    static void Main(string[] args)
    {
        int size = 512;
        int count = 1000;
        int[,] array = new int[size, size];
        int total = 0;
        var watch = System.Diagnostics.Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
            for (int x = 0; x < size; x++)
                for (int y = 0; y < size; y++)
                    total += array[x, y];
        watch.Stop();
        Console.WriteLine(String.Format("Sequential access by [x,y]: {0}ms", watch.ElapsedMilliseconds));
        watch = System.Diagnostics.Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
            for (int x = 0; x < size; x++)
                for (int y = 0; y < size; y++)
                    total += array[y, x];
        watch.Stop();
        Console.WriteLine(String.Format("Sequential access by [y,x]: {0}ms", watch.ElapsedMilliseconds));
        Console.WriteLine(total);
        Console.Read();
    }
}


The Output

Sequential access by [x,y]: 825ms
Sequential access by [y,x]: 2414ms
0


What a difference! Incrementing the first index in the inner loop takes almost three times as long. Rectangular arrays are stored in row-major order (the last index varies fastest in memory), so varying the second index in the inner loop walks through the array sequentially, while varying the first index jumps a full row on every step and causes far more cache misses.


The Verdict

For best performance, process your rectangular arrays by incrementing the first index in an outer loop, and the second index in an inner loop.

Concurrency Encapsulation

One of my projects is a perfect candidate for concurrency. There is a collection of entities that each need to be processed, and each can be processed independently. For the sake of Separation of Concerns, I developed a WorkManager class dedicated to performing an action on the elements of an enumeration. Before I spill the code, here is how it is used:

Entity[] myCollection = LoadEntities();
WorkManager manager = new WorkManager();
manager.ForEach(myCollection, e => e.Process());

Note the lambda expression above. It means that for each Entity in myCollection, the code will execute its Process method. The program will create a separate thread for each processor, and evenly distribute the workload across the threads.
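
To make the threading concrete, here is a minimal sketch of how a ForEach like this might work: one worker thread per processor, each pulling entities off a shared queue until it is empty. This is only an illustration, not the actual WorkManager, which follows below.

using System;
using System.Collections.Generic;
using System.Threading;

public class WorkManager
{
    // Run the action on every item, spreading the work across one thread per processor.
    public void ForEach<T>(IEnumerable<T> items, Action<T> action)
    {
        Queue<T> queue = new Queue<T>(items);
        List<Thread> threads = new List<Thread>();
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            Thread worker = new Thread(() =>
            {
                while (true)
                {
                    T item;
                    lock (queue)
                    {
                        if (queue.Count == 0)
                            return;         // nothing left; this worker is done
                        item = queue.Dequeue();
                    }
                    action(item);           // process outside the lock
                }
            });
            threads.Add(worker);
            worker.Start();
        }
        foreach (Thread thread in threads)
            thread.Join();                  // block until all the work is finished
    }
}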

And without further delay, the WorkManager class:

Continue reading Concurrency Encapsulation

Cleanup WSUS – Remove Computers No Longer in the Domain

One thing I love about WSUS is the ability to monitor the presence of clients. It gives me a good approximation of the last time a computer was on the network. I often use this information to help me clean missing computers out of Active Directory.

But what about when a computer is removed from the domain before it is removed from WSUS? Rather than manually checking, I wrote an IronPython script that compares the list of computers in Active Directory with the computers on WSUS. When I run this script, it lists computers that should be removed from WSUS, and deletes them for me (after prompting).
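
The script itself is behind the link, but the heart of it is just a set difference between two lists of computer names. Here is a rough C# sketch of that comparison, purely as an illustration of the idea (the real script is IronPython and also prompts before deleting anything); it assumes the Microsoft.UpdateServices.Administration and System.DirectoryServices APIs are available.

using System;
using System.Collections.Generic;
using System.DirectoryServices;
using Microsoft.UpdateServices.Administration;

class StaleWsusComputers
{
    static void Main()
    {
        // Collect the DNS names of every computer account in Active Directory
        HashSet<string> adComputers = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        DirectorySearcher searcher = new DirectorySearcher("(objectCategory=computer)");
        foreach (SearchResult result in searcher.FindAll())
        {
            if (result.Properties.Contains("dnshostname"))
                adComputers.Add((string)result.Properties["dnshostname"][0]);
        }

        // Report every WSUS client that no longer has a matching AD account
        IUpdateServer server = AdminProxy.GetUpdateServer();
        foreach (IComputerTarget target in server.GetComputerTargets())
        {
            if (!adComputers.Contains(target.FullDomainName))
                Console.WriteLine("Not in AD: " + target.FullDomainName);
        }
    }
}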

Continue reading Cleanup WSUS – Remove Computers No Longer in the Domain

Embedding IronPython

In my current programming project, I’ve embedded IronPython in a C# program. I thought I would share the basics of embedding a scripting engine. I imagine the process would be the same for any language that uses the DLR (Dynamic Language Runtime), like IronRuby. Here is a sample using IronPython 2.0 Beta 5.

using System;
using IronPython.Hosting;
using Microsoft.Scripting;
using Microsoft.Scripting.Hosting;
public class Program
{
    // Delegate matching the signature of the factorial function
    delegate int FactorialDelegate(int n);
    static void Main(string[] args)
    {
        // Our factorial function
        string[] lines = {"def factorial(n):",
                          "  for i in range(1, n):",
                          "    n = n * i",
                          "  return n"};
        string code = String.Join("\r", lines);
        // Instantiate the IronPython environment
        ScriptEngine engine = Python.CreateEngine();
        // Create a scope/module to work in
        ScriptScope scope = engine.CreateScope();
        // A little preparation
        ScriptSource source = engine.CreateScriptSourceFromString(code, SourceCodeKind.Statements);
        // Compile the code
        CompiledCode compiled = source.Compile();
        // Execute the code in the scope
        compiled.Execute(scope);
        //Now the factorial function exists in the IronPython environment. Let's use it.
        // Set x = 5
        scope.SetVariable("x", 5);
        // print factorial(x)
        ScriptSource print = engine.CreateScriptSourceFromString("print factorial(x)", SourceCodeKind.SingleStatement);
        print.Execute(scope);       //outputs 120
        // Get the result from IronPython
        int result1 = scope.Execute<int>("factorial(6)");
        Console.WriteLine(result1); //outputs 720
        // We can also call the function directly from C#
        FactorialDelegate factorial = scope.GetVariable<FactorialDelegate>("factorial");
        int result2 = factorial(7);
        Console.WriteLine(result2); //outputs 5040
        Console.Read();
    }
}

Math.BigMul Exposed

Today a friend and I were reflecting through System.Math (courtesy of IronPython) and we noticed the BigMul method:

Math.BigMul(Int32, Int32) : Int64

Why have a method just for multiplication? It seems too trivial a reason to add a method to the .NET Framework. After all, multiplication with casting does the same thing:

(long)a * (long)b

Being optimistic, I suggested that perhaps Microsoft’s BigMul implements a faster, more efficient multiplication algorithm. Maybe there is a clever way to multiply two 32-bit numbers without explicitly casting to 64-bit. Naturally, I wrote a simple speed test.

static void Main(string[] args)
{
    int a = 40993;
    int b = 69872;
    long c = 0;
    DateTime start;
    TimeSpan length;
    Console.WriteLine("Inline multiplication");
    start = DateTime.Now;
    for (int i = 0; i < 1000000000; i++)
        c = (long)a * (long)b;
    length = DateTime.Now - start;
    Console.WriteLine(c);
    Console.WriteLine(length.ToString());
    Console.WriteLine();
    Console.WriteLine("Math.BigMul");
    start = DateTime.Now;
    for (int i = 0; i < 1000000000; i++)
        c = Math.BigMul(a, b);
    length = DateTime.Now - start;
    Console.WriteLine(c);
    Console.WriteLine(length.ToString());
    Console.WriteLine();
    Console.Read();
}

The results were not encouraging.

Continue reading Math.BigMul Exposed

Debugging with Internet Explorer and Visual Studio

I prefer Firefox as my default web browser. For website development, Firefox with Firebug is a killer combination. But when I test a page in Internet Explorer, I sometimes get cryptic Javascript errors that are impossible to track down. It turns out that Internet Explorer needs some coaxing to play nice with Visual Studio.

Enable Javascript Debugging
Open Internet Options and head over to the Advanced tab. Uncheck Disable script debugging (Internet Explorer).


That’s all it takes; the next time you encounter a Javascript error, you will be prompted to debug. Select an instance of Visual Studio and you’ll have interactive debugging.

Launch with Internet Explorer
When you run or debug a website, Visual Studio uses your default system browser. But maybe you prefer to debug in a different browser. To change this setting, right-click on an aspx file in Solution Explorer and select Browse With. Select the desired browser and set it as the default.


That’s it! Do you have any tips or tools for website debugging?

Linguistic Networks

The following is a report I wrote for my Artificial Intelligence course in November 2006. It discusses Franklin Chang’s article entitled “Symbolically speaking: A connectionist model of sentence production.”

F. Chang developed neural networks to produce proper sentences from basic messages. His networks were implemented on LENS neural network software. This research was performed around the year 2000 and a paper detailing his research and findings was published as an article in Cognitive Science in 2002. In this paper, I will explain, summarize, and analyze his article.

Background

Because Chang’s models are neural networks mimicking human linguistic abilities, comprehension of his work relies on familiarity with linguistics and connectionist models. Linguistics will be addressed as necessary throughout this paper. A neural network is a computing model comprised of nodes (also called units) grouped into layers. The nodes between two layers are connected by weights. As nodes are activated, the activations of each layer are passed forward to the next layer by the connecting weights. Layers are connected in a forward-feeding fashion so that there are no loops. The value of a weight determines how much of the activation from the sending node is transferred to the receiving node. The total of all the activations received by a node determines its activation value. The result is a network of nodes that pass activations forward through the network. Activations originate from the input layers, which serve as the input for the network. These activations feed-forward to the output layers, whose activations serve as the output of the network. Learning occurs as weights are adjusted to correct the actual output values to match the desired output values.
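
As a heavily simplified illustration of that feed-forward computation (the general mechanism only, not Chang’s LENS networks), passing one layer’s activations through a weight matrix to the next layer looks like this:

using System;

class FeedForwardExample
{
    // weights[i, j] says how much of sending node i's activation reaches receiving node j.
    static double[] FeedForward(double[] sending, double[,] weights)
    {
        int receivingCount = weights.GetLength(1);
        double[] receiving = new double[receivingCount];
        for (int j = 0; j < receivingCount; j++)
        {
            double total = 0.0;
            for (int i = 0; i < sending.Length; i++)
                total += sending[i] * weights[i, j];       // weight scales the sent activation
            receiving[j] = 1.0 / (1.0 + Math.Exp(-total)); // squash the total into an activation value
        }
        return receiving;
    }

    static void Main()
    {
        double[] inputLayer = { 1.0, 0.0, 0.5 };
        double[,] weights = { { 0.2, -0.4 }, { 0.7, 0.1 }, { -0.3, 0.9 } };
        foreach (double activation in FeedForward(inputLayer, weights))
            Console.WriteLine(activation);
    }
}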

Abstract

Chang’s article discusses and compares several connectionist models for properly constructing sentences appropriate to given messages. To achieve this goal, the models need to learn the proper vocabulary and grammar for expressing messages. Traditional approaches have relied heavily on statistics, employing simple recurrent networks (SRN). This approach theorizes that by learning the statistical regularities of a sufficiently large training set, an SRN model can generalize to correctly produce sentences for similar messages not found in the training set. However this approach generally doesn’t generalize as well as desired. While it can correctly learn the training set, the SRN model poorly applies what it has learned to novel situations. To overcome this difficulty, Chang develops the new dual-path model. The goal of the dual-path model is to utilize symbols in its processing. This model separates the actual words from the context in which they are used. This abstraction allows the dual-path model to learn how the contextual symbols of a message are arranged independently of learning the contents of the symbols.

Summary

A message is an event comprised of an action (verb) and the related participants. “The participants in an event [are] classified into one of three event roles: agent, patient, goal. The agent [is] the cause of the action, the goal [is] the final location for the object, and the patient [is] the object in motion or the affected object.” (612) For example, in the simplified sentence the man give the boy a plate, the underlying message would be comprised of the action give, the agent the man, the patient a plate, and the goal the boy. These messages are abstracted from any surface sentence structure. Additional information is also included in the message, such as whether a participant is a definite participant (marked by the in a sentence), and various event semantics. Event semantics indicate features in a message. In our example, the man give the boy a plate, there is a cause (the man initiates the action), a motion (the plate is moved), and a transfer (the boy comes to possess the plate). These event semantics are important for distinguishing between possible uses of a verb.

The models are challenged with building proper sentences, something that takes years for a human to become proficient at. There are two aspects to this problem: correct lexical selection and proper sentence-level ordering. The lexical aspect involves representing each feature in the message with the correct word in the sentence, i.e., choosing the correct vocabulary word to express a concept. For example, when the feature DOG is in the message, the program must output the correct vocabulary word dog in the sentence. At the sentence level, the words must be placed in the correct positions and order. To further illustrate our example, if the action is BARK, then the correct sentence would be a dog bark, not bark a dog. It is important that the network learns these two aspects independently of each other. Otherwise it may not be able to place words into novel positions that the network hasn’t experienced before.

Simple Recurrent Network

The Production Simple Recurrent Network (Prod-SRN) is the first network model employed by Chang. This model has three layers feeding the hidden layer: cword, message, and context. The hidden layer feeds an output layer: word. The model is recurrent, each cycle producing the next word of the sentence. Before the cycles begin, the message layer is activated with the features present in the message. These activations are static and do not change once initially set. Before each cycle, the cword layer is activated to represent the previous word in the sentence. Also, the previous activations of the hidden layer are copied to the context layer. This allows the context layer to serve as a memory of the previous cycle. During each cycle, activations feed-forward through the network. The resulting activations of the word layer indicate the word produced by the network. During learning, weights are modified using back-propagation to help match the actual output to the desired output. This process is repeated until the end of the sentence is produced (indicated by an end-of-sentence marker).

The results of this model are disappointing. In Chang’s research, the Prod-SRN model reaches 99% accuracy in learning the training set. However Prod-SRN performs very poorly on the test set, never topping 13% accuracy. It seems the knowledge learned by the Prod-SRN model is not of much use outside the training set. It does not learn lexical selection separately from sentence-level placement. The two are too tightly bound, hindering its ability to place words into new positions. This hindrance is due to the single mechanism used for learning both the lexical and sentence-level aspects of sentence construction. In short, Prod-SRN generalizes poorly.

Dual-Path Model

When Chang set out to design an improved model, he wanted it to be better suited to generalize in novel situations. He devised the dual-path model, with two distinct paths through the network. These two paths give the model two mechanisms with which to learn sentence construction. One pathway is the sentence-level pathway. The first half of the sentence-level pathway sequentially leads from the cword layer, to the cwhat layer, to the cwhere layer, to the hidden layer. Just as in the Prod-SRN model, the cword layer is activated with the previous word of the sentence. The cwhat layer represents the feature associated with the previous word. The cwhere layer represents the role associated with the previous word. This structure causes the activations reaching the hidden layer in the sentence-level pathway to be abstracted from the actual word, symbolically representing only the role of the current word in the message. There is one cwhat node for each possible feature and one cwhere node for each possible role. The cwhat-cwhere weights are set according to the message and are static throughout a sentence’s production. For example, if the action is GIVE then the weight between the ACTION node in the cwhere layer and the GIVE node in the cwhat layer would be set as positive. And if the agent is MAN then the weight between the AGENT node in the cwhere layer and the MAN node in the cwhat layer would be set as positive. All other unused weights between the cwhat and cwhere layers are set to zero. These bindings represent knowledge of the message in the network.

The second half of the sentence-level pathway is a reversal of the first half. The hidden layer leads to the where layer, then to the what layer, and finally to the word layer. The weights between the where and what layers are a mirror copy of the weights between the cwhere and cwhat layers. This way the hidden layer can receive role activations from the first half of the pathway, and send role activations through the second half. The activated roles are translated back into the related features by the where-what weights.

The other pathway in the dual-path model influences lexical selection. It leads from the cword layer, to the ccompress layer, to the hidden layer, to the compress layer, and finally out to the word layer. This pathway learns in a manner similar to the Prod-SRN model, except that the layers are smaller because the dual-path model does not solely rely upon this pathway as a means of sentence construction.

Three additional layers feed the hidden layer. The cwhere2 layer contains the summation of previous cwhere activation levels. This serves as a memory of roles previously activated during sentence production. The context layer is a copy of the hidden layer’s previous activations. This also serves as a memory, just as in the Prod-SRN model. Finally, the event-semantics layer encodes the event semantics of the message. There is a unit for each possible event semantic, and the appropriate units are set as active for the duration of sentence production.

Test Results

The dual-path model is also tested and compared to the Prod-SRN model. Not only does it learn the training sets just as well as the Prod-SRN model, it also correctly constructs about 80% of the sentences in the testing set. This is much better than the 13% accuracy achieved by the Prod-SRN model. The success of the dual-path model is owed to the symbolic processing achieved by separate mechanisms for lexical selection and sentence-level ordering. This allows the dual-path model to determine the placement of the words independently from selection of the words. The result is a markedly improved ability to generalize compared to the Prod-SRN model.

Two additional models are created as modifications of the dual-path model. They both differ from the dual-path model in only one way. This serves to highlight the importance of particular features in the dual-path model. The no-event-semantics model is identical to the dual-path model, except that it lacks the event-semantics layer. Without event semantics, the model loses information about how the message is to be constructed. In practice, it learns the training set as well as any other model, but produces sentences in the testing set with only 50% accuracy. This decrease in performance highlights the importance of event semantics in sentence construction.

The final model is the linked-path model. It differs from the dual-path model in that it has a link between the hidden layer and the what layer. This violates the dual-path architecture because it circumvents the symbol abstraction provided by the sentence-level path. Sometimes a network can perform better with extra layers or links between layers added for complexity. This alternate model was developed to test whether the success of the dual-path model is primarily attributed to its extra complexity, or to the two separate pathways. The results show that the linked-path model learns the training set just as well as the other two models, but can only achieve 70% accuracy on the testing set. In fact, after a certain point, the accuracy of the linked-path model decreases with additional training. This suggests the linked-path model starts to memorize the training set, a problem that is avoided in the dual-path architecture. It can be concluded that the dual-path model’s success is due to its separation of lexical and sentence-level processing.

Additional testing and analysis of the dual-path model suggests that it learns in ways similar to humans. When the system is damaged (by randomly changing weights) in particular regions that correspond to regions of the brain in impaired patients, it is found to generate errors similar to the human patients’ errors. Other testing shows that as the dual-path model learns, it over-generalizes and sequentially constrains its over-generalizations in a fashion similar to children. As the dual-path model learns, it over-generalizes the use of verbs into improper constructions because the sentence-level pathway allows it to place verbs a number of ways. But as it continues to learn, the lexical pathway begins to constrain the over-generalizations to prevent particular verbs from being used improperly. This same pattern of behavior can be observed in children. The similarity between human examples and the dual-path model is a natural result of the dual-path model being designed specifically to mimic human-like use of symbols in sentence construction.

Analysis

Chang’s work is impressive because he successfully unites several fields of study into a fully functional model. His accurate linguistics model is constructed as a compromise between two connectionist schools of thought, one statistically based, the other modular. Chang borrows from both, building a modular sentence-level pathway and a statistical lexical pathway. I think the most innovative feature is the representation of the message as bindings between the what and where layers. Although not discussed in the article, I imagine that the most difficult aspect of the project would have been the generation of valid messages and sentences to use as training and testing sets.

Chang’s analysis of the dual-path model reveals close similarities to observed human linguistic abilities, development, and hindrances. Therefore, it helps confirm the linguistic theories that it is built upon. It raises interesting questions about what other seemingly complex human behavior can be mimicked given an insightful model. Further research could be conducted with an expanded domain, one with an increased lexicon (Chang only used 63 words) and a greater assortment of event semantics to produce a greater variety of sentences. Other extensions might involve adding noun-verb agreement or verb tenses. Additionally, the system could be reversed to extract a message from a sentence.

WinOthello – A C# implementation of Reversi

The following is a report on my C# program, WinOthello. I designed it for my Artificial Intelligence course in November 2006.

Download the game (57 KB) or the source code (85 KB).
Provided under a permissive open source license.
.NET Framework 2.0 required.

Abstract
As an exploration of artificial intelligence and particularly game state space search, I developed WinOthello to simulate Othello games. This application allows human and computer players to interact with a virtual game board displayed on the user’s screen. The primary objective of WinOthello was to devise an adept computer player that proves challenging to average human players. This goal was achieved via a depth-limited minimax search of the game space and quickened with the application of alpha-beta pruning.

What is Othello?
Othello is a strategic two-person board game. Players take turns placing pieces on an eight-by-eight board. Each piece has two distinct faces: a light face and a dark face. The first player is designated the dark player and places pieces on the board dark side up. The second player is the light player and places pieces light side up. Pieces can only be placed in positions that outflank the opponent’s pieces. Outflanking occurs when a player places a piece that causes a contiguous line of one or more of the opponent’s pieces to be bounded by the player’s pieces. All of the opponent’s outflanked pieces are then flipped over, and the turn ends. If a player cannot place a piece, then play passes to the other player. The game ends when neither player can place any more pieces. The player with the most pieces of his color on the board at the end of the game wins.


Why Othello?
I chose to explore Othello because it is a perfect candidate for applying state space search. This falls under the game-playing field of artificial intelligence. Othello is suitable because it is a two-person game. One player can be viewed as trying to maximize his score, while the second tries to minimize the first player’s score. This is a foundational concept of minimax. Additionally, Othello provides well-defined moves, meaning that for any game state, a list of all possible immediate moves can be generated. The generation of moves allows the projection of later game states, another requirement of minimax. In Othello, pieces can only be placed and cannot be moved or removed. This means that only a finite number of moves are possible, which simplifies a minimax search. Another convenience is that Othello is purely strategic and doesn’t rely upon chance. This makes the game easy to accurately model. All of these factors make Othello an excellent choice for applying AI techniques.

Goals
My primary goal was to develop an effective Othello computer player. I planned on using a depth-limited minimax search to find good moves. As a further enhancement, I wanted to apply alpha-beta pruning to reduce the search time. The resulting computer player should decide upon a move within a reasonable amount of time, and win almost every game against an average human player. This portion of my project is referred to as the engine and directly relates to artificial intelligence.

My secondary goal was to develop a pleasant Othello GUI. The board should be displayed to the user and allow simple clicks to place pieces. Game settings are to be set at the start of each game. Additional features could include hints for human players and a game history allowing the undo of a move. This portion of my project is referred to as the GUI and isn’t directly related to artificial intelligence.

Approach
Minimax is at the heart of my Othello project. Minimax involves systematically searching possible future gamestates to decide the best move in the current situation. Starting with the current gamestate as the root node, child nodes are created for each currently plausible move. This generation of nodes continues recursively until a specified depth or an endgame situation. After node generation, each of the bottom nodes is evaluated. The higher a node’s value, the more advantageous it is to the current player (MAX). The lower a node’s value, the more advantageous it is to the opposing player (MIN). These values bubble up to their parent nodes. A parent node chooses the move leading to the maximum child node if the node’s game state represents MAX’s turn. If it is MIN’s turn, the parent node chooses the move leading to the minimum child node. This assumes that MAX will always want to choose a move that maximizes his score, and MIN will choose a move that minimizes MAX’s score. Once the root node chooses a maximizing move, that move is chosen for play.

As an optimization, I extended the minimax search with alpha-beta pruning. This greatly reduced computation time without changing the result of the search. Alpha-beta pruning works by halting the generation and evaluation of nodes for portions of the search that will not be chosen anyway. To make the pruning more efficient, moves are sorted before evaluation with the most promising moves checked first. This increases the likelihood that poor moves will be pruned from the search. The decreased computation time allowed me to increase the depth of search. In general, greater search depths should result in better move decisions.
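
In code, the search has roughly this shape. This is a generic sketch rather than the WinOthello engine itself; the IGameState interface is a stand-in for the engine’s board and game classes, with the static evaluator and plausible move generator described below hiding behind it.

using System;
using System.Collections.Generic;

interface IGameState
{
    bool IsEndgame { get; }
    int Evaluate();                        // static evaluator, from MAX's point of view
    IEnumerable<IGameState> NextStates();  // plausible moves, most promising first
}

static class Search
{
    public static int Minimax(IGameState state, int depth, int alpha, int beta, bool maximizing)
    {
        if (depth == 0 || state.IsEndgame)
            return state.Evaluate();       // bottom node: run the static evaluator

        foreach (IGameState child in state.NextStates())
        {
            int value = Minimax(child, depth - 1, alpha, beta, !maximizing);
            if (maximizing)
                alpha = Math.Max(alpha, value);   // MAX raises the lower bound
            else
                beta = Math.Min(beta, value);     // MIN lowers the upper bound
            if (beta <= alpha)
                break;                            // prune: the other player would never allow this branch
        }
        return maximizing ? alpha : beta;
    }
}

The root call starts with alpha at int.MinValue and beta at int.MaxValue, and the move whose child scores best is the one chosen for play.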

Static Evaluator
I implemented a very simple static evaluator. It simply counts the number of pieces MAX has on the board, and subtracts the number of pieces that MIN has on the board. The resulting number reflects the relative advantage that MAX has over MIN, with a high number indicating a higher advantage. If the gamestate is an endgame situation, a value of 100 is returned for MAX’s win, and -100 for MAX’s loss (or a tie).
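
Given the pair of eight-by-eight Boolean arrays described under Program Architecture below, the evaluator amounts to something like this sketch:

// Counts MAX's pieces minus MIN's pieces; endgame positions collapse to +/-100.
static int Evaluate(bool[,] maxPieces, bool[,] minPieces, bool isEndgame)
{
    int maxCount = 0, minCount = 0;
    for (int row = 0; row < 8; row++)
        for (int col = 0; col < 8; col++)
        {
            if (maxPieces[row, col]) maxCount++;
            if (minPieces[row, col]) minCount++;
        }
    if (isEndgame)
        return maxCount > minCount ? 100 : -100;   // 100 for MAX's win, -100 for a loss or tie
    return maxCount - minCount;                    // MAX's relative piece advantage
}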

Plausible Move Generator
The plausible move generator returns a list of all possible moves available to a given player for a given gamestate. This turned out to be a tricky section of code that scans outward from each board location for possible ways to outflank the opponent. Each possible move location is also associated with a value indicating the number of pieces that would be flipped by the move. This value is used to sort the moves for effective alpha-beta pruning.
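
Concretely, the scan looks roughly like the following sketch, using the same pair of Boolean boards described under Program Architecture: from every empty square, walk outward in each of the eight directions over a run of the opponent’s pieces, and count the square as a move only if some run ends on one of the player’s own pieces.

using System.Collections.Generic;

class PlausibleMove
{
    public int Row, Col;
    public int Flips;   // pieces the move would flip; used to sort moves for pruning
}

static class MoveGenerator
{
    static readonly int[] dr = { -1, -1, -1, 0, 0, 1, 1, 1 };
    static readonly int[] dc = { -1, 0, 1, -1, 1, -1, 0, 1 };

    public static List<PlausibleMove> Generate(bool[,] mine, bool[,] theirs)
    {
        List<PlausibleMove> moves = new List<PlausibleMove>();
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
            {
                if (mine[r, c] || theirs[r, c])
                    continue;                              // square already occupied
                int flips = 0;
                for (int d = 0; d < 8; d++)
                {
                    // Walk over a contiguous run of the opponent's pieces...
                    int rr = r + dr[d], cc = c + dc[d], run = 0;
                    while (rr >= 0 && rr < 8 && cc >= 0 && cc < 8 && theirs[rr, cc])
                    {
                        run++;
                        rr += dr[d];
                        cc += dc[d];
                    }
                    // ...and count it only if the run is capped by one of our own pieces.
                    if (run > 0 && rr >= 0 && rr < 8 && cc >= 0 && cc < 8 && mine[rr, cc])
                        flips += run;
                }
                if (flips > 0)
                    moves.Add(new PlausibleMove { Row = r, Col = c, Flips = flips });
            }
        return moves;
    }
}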

Results
WinOthello is an effective application of artificial intelligence. The engine implements minimax searching and prunes the search space to greatly reduce the computation time. Despite this efficiency, there is still a trade-off between difficulty and quickness. The computer player’s difficulty is directly related to the depth of the minimax search. The greater the difficulty, the deeper the search and the longer it takes. A depth of six takes only seconds on modern computer hardware. I am relatively new to Othello and do not consider myself an expert player by any means. So if I can regularly beat the computer player, I consider it to be a weak player. Increasing the difficulty to 10 makes the computer a much more difficult opponent. However, at this depth, it can take the program about a minute to choose a move. It is debatable if this is a reasonable amount of time. It seems that, in WinOthello’s current form, challenging game-play and quick computation are mutually exclusive.

The GUI is simple, understandable, and easy to use. Many of the features outlined as goals were implemented. However, the undo feature hasn’t made it into the game yet. I suspect that it would be fairly easy to add, but I haven’t investigated its implementation yet. An additional feature I added near the end of WinOthello’s development was a game tree viewer. This function displays a tree containing each node traversed in the last minimax search performed by the program. The tree shows rudimentary information such as whether a node is MAX or MIN, the value of each node, the fraction of each node’s children that were actually searched, and the best move selected from the node’s children. I consider this feature incomplete as it was added late for debugging purposes. I decided to keep it for its educational value.

Comparison
Many other people have developed Othello playing programs. Logistello was a strong Othello computer program written in the 1990s by Michael Buro.  This program rose to prominence during its time, winning computer Othello tournaments and soundly defeating the world champion, Takeshi Murakami, in August 1997. It used a more complex static evaluator that considered Othello strategies such as mobility and parity. Logistello also used a book of opening moves that guided its moves at the beginning of the game. It even learned, expanding its book as it played each game. WinOthello does not even begin to approach the complexity of Logistello. Although Logistello is far superior, WinOthello was still a useful exercise in applying and exploring artificial intelligence. A quick comparison to Logistello suggests that WinOthello could greatly benefit from an enhanced static evaluator.

Program Architecture
WinOthello is written in the object-oriented language C#. I will only discuss the architecture of the project’s engine because the GUI is not relevant to artificial intelligence. The board is represented by a board class. The class contains a pair of two-dimensional Boolean arrays, each eight-by-eight, one array for each player. Each piece on the board is represented by a true value in the corresponding array. Various operations can be performed on the board, such as getting a position’s color, testing the possibility of using a location for a move, getting the players’ scores, making a move, copying the board, running the static evaluator, and running the plausible move generator. The game knowledge is primarily represented by this class. Most other classes in the engine operate on this board class.

A player object represents each player. This class indicates if a player is human or computer, the difficulty level, and the player color. This class contains the recursive searching method used to get a computer player’s next move.

A move object also exists. It is used to store the moves found by the plausible move generator. Each move contains the location of the move on the board, a value representing the number of pieces immediately flipped as a result of the move, and a random number used to randomize the order of moves with the same value.

The game object serves as an interface for the GUI to access the engine and controls most aspects of the simulated Othello game. Methods are provided to start the game and allow players to move. Values such as the location of pieces, the scores, and the current turn can also be accessed through the game object. Because of WinOthello’s modular design, a completely different interface can be written that uses the exact same engine without having to rewrite any game-specific code.

Extensions
Far more could be done with WinOthello. The static evaluator could be enhanced with better heuristics. Various parts of the program could be optimized. Some sort of memory could be added to quicken searches by recalling previously generated moves. An undo feature could be incorporated. Alternative interfaces could be implemented, such as a command line or web interface.

Reflection
My Othello journey has been both exciting and disappointing. I was happy to make quick progress in developing a GUI and working game engine, and thrilled to see it actually make good moves. However, my chosen approach proved limited. The simple static evaluator seems to be the program’s weak point. I chose it because I wanted to keep the design simple, and because I assumed that the program would choose good moves if it was allowed to search the game space deep enough. However, time proved to constrain the search depth more than I anticipated. The program isn’t allowed to search as deep as it needs to effectively use such a simple static evaluator.

The greatest difficulty was in trying to design an elegant program structure. I started rewriting the engine several times, once completely rewriting it. It could have been beneficial to map out the program in detail before I started coding. However, given the experimental nature of the assignment, I think my approach was appropriate. I wasn’t sure of what would work best until I had tried it.

An interesting question raised by my project is: what constitutes artificial intelligence? My program uses a brute force method to find an ideal solution. Such systematic probing does not seem very humanlike. Even a “smarter” Othello program works by searching a game space and evaluating game states based on preprogrammed heuristics. It seems that all the intelligent behavior stems from the programmer. Any program’s “intelligence” is a result of its design. Programs are simply tools imbued by their creators with a distinctly artificial form of intelligence.

Genius – Made or Born?

The following is a report I wrote for my Artificial Intelligence course in September 2006. It discusses an article by David Dobbs entitled “How to be a genius.” This report also references an email exchange (source not available) between Douglas Hofstadter and David Dobbs.

In the email conversation between Howard Rutiezer and Douglas Hofstadter, Rutiezer presents an article by David Dobbs while Hofstadter rebuts it with his own observations. The topic of discourse is the relationship between innate intelligence or talent, and greatness in terms of success or recognition. Dobbs contends that although some people are inherently smarter or more talented than other people, their gifts do not guarantee their greatness in terms of success. Furthermore, people who achieve greatness are not necessarily more gifted than usual. Dobbs argues that greatness is achieved by fervent and lengthy hard work, and is not dependent on a person’s inherent traits. Hofstadter strikes back, presenting his personal observations and experiences as counterexamples to Dobbs’ viewpoint. Hofstadter asserts that hard work alone cannot achieve greatness; it is necessary to also be specially gifted.

Dobbs starts with a narrative, explaining how he enjoyed natural intelligence and writing talent in his youth. But as he matured, he discovered that his natural talents would not carry him far unless he intently dedicated himself to his work. He cites studies in which people who have achieved greatness are shown to have high IQs on average, but not remarkably high IQs. This indicates that another factor is at work: an individual’s drive and willingness to work. According to Dobbs, some factors leading to greatness are constructed as a child. An environment encouraging children to explore and learn is thought to contribute to the child’s potential for greatness. Even then, Dobbs believes that someone who has achieved greatness is rarely exceptional outside of their field of expertise. Their greatness is focused in one area of practice that they have dedicated themselves to, and often doesn’t spill over into unrelated areas of life. Thus, greatness is the result of our (sometimes average) inherent talent being fully realized through massive amounts of dedicated hard work.

However, Hofstadter very much disagrees with Dobbs. As a graduate student, he found himself unable to push his understanding of math beyond a certain level of abstraction, despite his strenuous efforts to overcome this barrier. Thus, he believes that although hard work will certainly pay off, no amount of hard work can ensure greatness for someone who simply was born without the related capacity for greatness. He also contends that there are many who achieve greatness with only a modest amount of effort; they are carried almost entirely by their inherent abilities. Therefore Hofstadter believes that greatness is in large part determined by the abilities one is born with, and dedicated work plays only a small role in unlocking that potential.

Who is correct? Is genius made or born? The issue is widely debated. Hofstadter mostly draws upon his personal observations and experiences, while Dobbs also supports his assertions with several large studies. Personally, I believe that both natural talent and dedicated work are important for achieving success. But, like many things in life, there are obvious exceptions. Some people are natural cooking connoisseurs or chess masters and put forth only modest amounts of effort. Perhaps we aren’t all capable of being so exceptional, but most people can be reasonably successful with enough hard work and only modest natural talent.