Index of Languages

FORTRAN (~1953)
LISP (1958-)
ALGOL (1958)
COBOL (1959)
SIMULA (1964-1967)
PASCAL (1968/70)
PROLOG (~1972)
ML (1973)
The C Programming Language (1972)
ADA (~1978/79 ->)
C++ (1979)
Smalltalk (1971-80)
JavaScript (JS)
The “R” Programming Language
Visual Basic .NET (~2001/2)

There have been many programming languages over the years, often developed by dedicated, passionate, and indeed very geeky individuals whose passion has been for elegant abstractions of the ones and zeros upon which standard computers run.
Here are their origin stories.


Thinking about the history of programming languages, we must travel back to the precursors of modern languages, such as the Jacquard loom of the early 1800s, which could weave patterns based on cards inserted into the machine (essentially pre-programmed). Beyond that, we can look at the story of Mr. Charles Babbage’s Analytical Engine in the 1830s & 1840s as another key point in the history of computer science.

Within this timeline, we often come across the story of Ada Lovelace – a woman credited with writing the “first computer program” for Babbage’s Analytical Engine. Whilst this is an oversimplification of the true story, it is often cited, and indeed it is where the Ada programming language gets its name (we’ll learn more about this important but little-known language later, and its more modern variant Ada 95).

A visit to the Computer History Museum in Mountain View, California will likely give the reader a better understanding of how the story of computer hardware developed alongside the computer “program” over time (Ref#: A). As they remind us: “Thousands of programming languages have been invented, and hundreds are still in use. Some are general-purpose, while others are designed for particular classes of applications. [Yet] Few new languages are truly new; most have been influenced by others that came before”.

In the 1940s a gentleman named John von Neumann led a team that built computers featuring the use of stored programs, as well as a central processor.

One of these machines was ENIAC, which had to be programmed using patch cords. Von Neumann’s involvement led to a new binary-based machine that could store its program, which the ENIAC originally could not.

In the 1940s, machine code was the sole means of programming computers; however, because this was tedious and potentially error-prone, the idea of abstracting on top of it soon emerged.

If we talk about high-level programming languages (HLLs), then an early example was Plankalkül (which means “plan calculus”). This was developed by a gentleman called Konrad Zuse between 1942 and 1945, roughly alongside his development of what has been called the first “working digital computer” (Ref#: B). This early attempt to develop what we would now call a programming language did not find practical use at the time; nevertheless, it was remarkable how many “standard features of today’s programming languages” Plankalkül had (Ref#: C).




Another early higher-level language was Short Code. Developed by John Mauchly in 1949 and originally known as Brief Code, it was used with the UNIVAC I computer after William Schmitt made a version for it in 1950.

Following on from John’s work (and also working on the UNIVAC), Richard K. Ridgway and Grace Hopper developed the A-0 System in 1951/52, which has been called the “first compiler system”, although it was more of a loader/linker compared with a modern compiler.
In general, we have seen the following themes emerge across the decades:


A Visualization Of A Graph DB of Connections Between Languages

We’ve summarized the historical context, so let’s move on now to the stories of our main programming languages.

back to index

FORTRAN (~1953)

Fortran (or FORTRAN, from Formula Translation) is a general-purpose, imperative programming language that is especially suited to numeric computation and scientific computing.

It was a time of great change and technological advancements, with the world still recovering from the devastating effects of the Second World War. The year was 1953, and music was still in quite a familiar war-time style with popular songs like ‘A Sunday Kind of Love’ by The Harptones and ‘Have You Heard?’ by Joni James. It was also the year when Queen Elizabeth II, who was for many years the beloved reigning monarch of Britain, was crowned Queen following the passing of her father, King George VI. The end of the Korean War was also announced, and the presidency of the United States was handed over to Dwight D. Eisenhower (a 5-star general who had been responsible for planning and executing the successful D-Day invasion of Normandy).

The idea behind this language was to make it much easier for people to translate things like mathematical formulae into something machines understand, but in a way that was intuitive and readable. FORTRAN is considered to be one of the first HLLs (High-Level Languages) to achieve widespread adoption – what this means is that it was one of the first languages to abstract away the low-level operations of the CPU, thus allowing programmers to deal more conceptually with the algorithms they are writing.

FORTRAN is Procedural 

The 1950s were a time of rapid technological advancements and computers were no exception. With their increasing popularity, computer experts sought ways to make these machines easier to use and more accessible to the general public. This led to the development of procedural programming.

Procedural programming involved breaking down complex computations into smaller, more manageable steps called procedures or functions. These procedures could be invoked at any point during the execution of a program and even by other procedures. This approach made writing code much simpler and more efficient.
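The idea can be sketched in present-day Python (a minimal, illustrative example, not period code):

```python
# A minimal sketch of procedural decomposition: a larger computation
# broken into small procedures that can invoke one another.

def mean(values):
    return sum(values) / len(values)

def variance(values):
    m = mean(values)                      # one procedure invoking another
    return sum((v - m) ** 2 for v in values) / len(values)

def report(values):
    return f"mean={mean(values)}, variance={variance(values)}"

print(report([2, 4, 4, 4, 5, 5, 7, 9]))  # mean=5.0, variance=4.0
```

Each procedure stays small and reusable, which is exactly the benefit the paragraph above describes.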

As a result, several new programming languages emerged, including Fortran, ALGOL, COBOL, and BASIC. Fortran, with its focus on numerical computations and scientific computing, quickly gained popularity and recognition. Its user-friendly and intuitive approach was ahead of its time and continues to inspire programming languages to this day.

Amidst this exciting era, FORTRAN emerged as a truly remarkable development. Brought to life by the experts at IBM, FORTRAN was designed specifically for scientific and engineering applications. Its powerful capabilities and intuitive design soon established it as a dominant force in this field of programming.

For over half a century, FORTRAN has remained at the forefront of some of the most computationally intensive areas of study. From numerical weather prediction to finite element analysis, computational fluid dynamics, and beyond, FORTRAN has proven itself as a vital tool for scientific discovery.

And as the race for greater computing power continues, FORTRAN remains a popular choice for high-performance computing. It is the language behind programs that rank and measure the world’s fastest supercomputers, cementing its place in the annals of computing history.

Unlike ALGOL, FORTRAN was not designed to handle deeply recursive functions, a limitation classically illustrated by the highly intricate Ackermann function.

The Ackermann function is an important example of a recursive function and is used in computer science as a means to demonstrate the capabilities and limitations of programming languages. It is named after Wilhelm Ackermann, a German mathematician, and is a well-known example of a function that is computable but not primitive recursive.

The Ackermann function is defined by a small set of rules whose recursive calls nest very deeply, so the call depth and amount of work grow explosively with the arguments. Early FORTRAN could not express it directly because the language did not support recursion: each subprogram had a single, statically allocated activation record, so a routine could not call itself.

As a result, FORTRAN’s inability to handle the Ackermann function highlights its limitations with complex, recursive computations, and serves as an important reminder to computer scientists and programmers to consider the strengths and weaknesses of a given programming language when choosing a suitable tool for the task at hand.

(Ref#: B).
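To make this concrete, here is a sketch of the Ackermann function in Python (illustrative only; the point is that early FORTRAN, lacking recursion, could not express a definition of this shape):

```python
# The Ackermann function: total and computable, but not primitive
# recursive, so it cannot be written with counted loops alone.
import sys
sys.setrecursionlimit(100_000)  # the nested call depth grows quickly

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Even these tiny arguments already trigger thousands of nested calls; slightly larger ones become astronomically expensive.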

Key People

John Backus
Born in 1924, the same year that IBM itself took its name, Backus was a man with a really interesting life story. He eventually moved to New York and then began working on key projects for IBM.

The Team

The FORTRAN team was put together gradually, beginning with Irving (“Irv”) Ziller and John Backus; a short time later they were joined by Harlan Herrick, and then they hired Robert “Bob” Nelson as a technical typist. Then Sheldon Best from MIT came along, and Roy Nutt from United Aircraft, followed by Peter Sheridan, David Sayre, Lois Haibt, Richard “Dick” Goldberg, and others.

Richard Goldberg, Irving Ziller and John Backus


Irv Ziller

“In late 1953, Backus wrote a memo to his boss that outlined the design of a programming language for IBM’s new computer, the 704. This computer had a built-in scaling factor, also called a floating point, and an indexer, which significantly reduced operating time. However, the inefficient computer programs of the time would hamper the 704’s performance, and Backus wanted to design not only a better language but one that would be easier and faster for programmers to use when working with the machine. IBM approved Backus’s proposal, and he hired a team of programmers and mathematicians to work with him” (SOURCE: E).

The chronology was as follows: in 1949 Backus began working on IBM’s SSEC computer; he then worked in the famous Watson Lab in the period 1950–1952; and beyond that, he and his team published their work on FORTRAN in 1954.

Language Features

Fortran had what it called DO-loops (which were a bit like for-loops), which implemented programming loops using a counter, and these could be nested. However, it did not support user-level recursion due to employing a single stack frame per subprogram, making it a bad fit for something like the Ackermann function, which is innately recursive, whereas it could work for something primitive recursive like a Fibonacci-calculating function (Ref#: B).
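As a loose Python analogue (not FORTRAN itself), here are nested counted loops in the spirit of DO-loops, plus an iterative Fibonacci, the kind of non-recursive computation early FORTRAN handled comfortably:

```python
# Nested counted loops, roughly analogous to nested DO-loops:
table = []
for i in range(1, 4):        # like: DO 10 I = 1, 3
    for j in range(1, 4):    # like: DO 10 J = 1, 3
        table.append(i * j)  # like: 10 CONTINUE

def fibonacci(n):
    """Compute the n-th Fibonacci number with a counter loop, no recursion."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(table)          # [1, 2, 3, 2, 4, 6, 3, 6, 9]
print(fibonacci(10))  # 55
```

Because the loop is driven by a simple counter, no call stack is needed, which is why this style fit FORTRAN’s single-frame subprograms.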

Versions of FORTRAN

There are actually a number of different versions of FORTRAN, which have evolved over time.

“In 1958, IBM released a revised version of the language, named FORTRAN II. It provided support for procedural programming by introducing statements which allowed programmers to create subroutines and functions, thereby encouraging the re-use of code.
FORTRAN’s growing popularity led many computer manufacturers to implement versions of it for their own machines. Each manufacturer added its own customisations, making it impossible to guarantee that a program written for one type of machine would compile and run on a different type.

 IBM responded by removing all machine-dependent features from its version of the language. The result, released in 1961, was called FORTRAN IV” (Ref#: I).



Early versions of the language, including FORTRAN 66, dealt with character strings via the “Hollerith constant”, essentially converting them to some numerical representation; “13H” means that the 13 characters after it will be treated as a “character constant”.

“The space immediately following the 13H is a carriage control character, telling the I/O system to advance to a new line on the output. A zero in this position advances two lines (double space), a 1 advances to the top of a new page and + character will not advance to a new line, allowing overprinting” (SOURCE: F).
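As a rough, purely illustrative sketch (in Python, not FORTRAN), the underlying idea of a Hollerith constant, characters stored as numeric codes, looks like this:

```python
# Characters treated as numbers, the idea behind Hollerith constants.
text = "HELLO, WORLD."          # 13 characters, like 13HHELLO, WORLD.
codes = [ord(c) for c in text]  # each character becomes a numeric code

print(len(text))   # 13
print(codes[0])    # 72, the numeric code for 'H'
```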


FORTRAN 90 (95)

This added many of the features of more modern programming languages, including support for recursion, pointers, a CASE statement, parameter type checking, and many other changes.



FORTRAN was made for mathematicians and scientists, and to this day it enjoys a certain level of popularity in those areas. It is still widely used by physicists, and “in a survey of Fortran users at the 2014 Supercomputing Convention, 100% of respondents said they thought they would still be using Fortran in five years” (Ref#: J). In fact, products were still being produced in 2019 with support for a version of FORTRAN, such as “Intel Parallel Studio XE 2019” (Ref#: G), and the “OpenMP” API, which is used to explicitly direct multi-threaded parallelism on shared-memory machines; this also relates to the topic of Parallel Computing (Ref#: H).



back to index


LISP (1958 -)

LISP is a symbolic language that can be challenging to learn; it is used chiefly in academic circles, with its main applications in artificial intelligence.
“Lisp (historically LISP) is a family of computer programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in 1958, Lisp is the second-oldest high-level programming language in widespread use today. Only Fortran is older, by one year. Lisp has changed since its early days, and many dialects have existed over its history. Today, the best known general-purpose Lisp dialects are Clojure, Common Lisp, and Scheme” (Wikipedia).
Lisp is often considered declarative; in fact, however, Lisp is multi-paradigm: it is procedural as well as functional.

Declarative programming is often defined as any style of programming that is not imperative. A number of other common definitions attempt to define it by simply contrasting it with imperative programming. For example:

  • A high-level program that describes what a computation should do.
  • Any programming language that lacks side effects (or, more specifically, is referentially transparent).
  • A language with a clear correspondence to mathematical logic.


These definitions overlap substantially.

Declarative programming contrasts with imperative and procedural programming. Declarative programming is a non-imperative style of programming in which programs describe their desired results without explicitly listing commands or steps that must be performed. Functional and logical programming languages are characterized by a declarative programming style. In logical programming languages, programs consist of logical statements, and the program executes by searching for proofs of the statements.
In a pure functional language, such as Haskell, all functions are without side effects, and state changes are only represented as functions that transform the state, which is explicitly represented as a first-class object in the program. Although pure functional languages are non-imperative, they often provide a facility for describing the effect of a function as a series of steps. Other functional languages, such as Lisp, OCaml and Erlang, support a mixture of procedural and functional programming.
(Wikipedia).
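The contrast above can be sketched in Python (an illustrative example, not from the source):

```python
# Imperative vs. declarative-leaning styles for the same computation.
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: explicit steps that say *how* to build the result.
evens_imperative = []
for n in numbers:
    if n % 2 == 0:
        evens_imperative.append(n)

# Declarative-leaning: an expression that says *what* the result is.
evens_declarative = [n for n in numbers if n % 2 == 0]

print(evens_imperative)   # [4, 2, 6]
print(evens_declarative)  # [4, 2, 6]
```

Both produce the same list; the difference is whether the program spells out the steps or just describes the desired result.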

Key People

John McCarthy

McCarthy was a key figure in early AI who coined the term “Artificial Intelligence”.

He “developed the Lisp programming language family, significantly influenced the design of the ALGOL programming language, popularized timesharing, and was very influential in the early development of AI” (Wikipedia).

Links to Smalltalk and the History of OOP

“Lisp deeply influenced Alan Kay, the leader of the research team that developed Smalltalk at Xerox PARC; and in turn Lisp was influenced by Smalltalk, with later dialects adopting object-oriented programming features (inheritance classes, encapsulating instances, message passing, etc.) in the 1970s. The Flavors object system introduced the concept of multiple inheritance and the mixin. The Common Lisp Object System provides multiple inheritance, multimethods with multiple dispatch, and first-class generic functions, yielding a flexible and powerful form of dynamic dispatch.”(Wikipedia).

Example Code

Hello World

Average of Numbers

Using Lambdas (aka Anonymous Functions)

Source: Ref#B

Conditional Logic
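As rough Python stand-ins for the Lisp examples named above (illustrative analogues, not the original Lisp code):

```python
# Hello World
print("Hello, World!")

# Average of numbers
def average(xs):
    return sum(xs) / len(xs)

# Using a lambda (anonymous function), much like Lisp's lambda
square = lambda x: x * x

# Conditional logic, akin to Lisp's cond/if
def sign(x):
    if x > 0:
        return "positive"
    elif x < 0:
        return "negative"
    else:
        return "zero"

print(average([1, 2, 3]))  # 2.0
print(square(4))           # 16
print(sign(-5))            # negative
```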



back to index


ALGOL (1958)


ALGOL is a high-level language with an algebraic style (it is no longer in current use, but it influenced languages like Ada and Pascal); its main use was in mathematical work.
The year was 1958, the year NASA was created (with the work of Operation Paperclip scientists and others), and the year of the Brussels World’s Fair; in the popular charts that year were the Everly Brothers with “All I Have to Do Is Dream”. The setting was ETH Zurich (a well-known STEM university) in Switzerland, and the occasion was the Zurich ACM-GAMM Conference. In a joint project of the ACM (Association for Computing Machinery) and the GAMM (Association for Applied Mathematics and Mechanics), the proposed International Algebraic Language was approved.

Notable features of IAL included compound statements; IAL was “intended to provide convenient and concise means for expressing virtually all procedures of numerical computation while employing relatively few syntactical rules and statement types”(Ref#: A).
“The first ALGOL 58 compiler was completed by the end of 1958 by Friedrich L. Bauer, Hermann Bottenbruch, Heinz Rutishauser, and Klaus Samelson for the Z22 computer. Bauer et al. had been working on compiler technology for the Sequentielle Formelübersetzung (i.e. sequential formula translation) in the previous years”(Source: Wikipedia).

Key People

John (J. W.) Backus, a programming language designer at IBM.

Peter Naur, a Danish computer scientist (“datalogist”) and a Turing Award winner, known for the Backus–Naur form (along with John Backus of ALGOL fame); he contributed to the creation of the ALGOL 60 programming language.

There are three flavors of ALGOL which take their names from their years of instantiation: 


ALGOL 58, originally known as IAL, is one of the family of ALGOL computer programming languages. It was an early compromise design, soon superseded by ALGOL 60.


ALGOL 60 […] followed on from ALGOL 58, which had introduced code blocks and the begin and end pairs for delimiting them. ALGOL 60 was the first language implementing nested function definitions with lexical scope. It gave rise to many other programming languages, including CPL, Simula, BCPL, B, Pascal and C.
Code Example (ICT 1900 series variety)
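The original ICT 1900-series listing is not reproduced here, but the key idea ALGOL 60 pioneered, nested function definitions with lexical scope, can be sketched in Python (illustrative, not ALGOL):

```python
# Nested functions with lexical scope: the inner function can see and
# update a variable belonging to the enclosing function.

def make_counter(start):
    count = start          # local to make_counter

    def step():
        nonlocal count     # refers to the enclosing scope's variable
        count += 1
        return count

    return step

counter = make_counter(10)
print(counter())  # 11
print(counter())  # 12
```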

For a variety of reasons, ALGOL 60 was never widely used, and other languages of the time were generally preferred.


ALGOL 68 was designed to be a successor to the ALGOL 60 programming language, with the goal of expanding its scope of application and providing a more precisely defined syntax and semantics. The language’s definition, however, was highly complex, spanning hundreds of pages with unconventional terminology, making compiler implementation a challenge. As a result, ALGOL 68 was said to have “no implementations and no users”, which was only partially accurate. Despite this, the language found limited use in certain niche markets, such as in the United Kingdom where it was widely used on ICL machines, and as a tool for teaching computer science. However, outside of these communities, its usage was relatively limited.

ICL (International Computers Limited) was a UK-based computer company that used ALGOL 68 as one of the primary programming languages on its mainframe computers in the 1970s and 1980s. ICL machines were widely used in government, academic, and research institutions in the UK and ALGOL 68 was well-suited to the high-level mathematical and scientific computing tasks that were common on these systems. The language’s support for complex data structures and its high-level, expressive syntax made it popular among ICL users for a variety of applications, including scientific simulations, data analysis, and numerical computations.

The use of ALGOL 68 on ICL machines was significant because it helped establish the language as a viable option for scientific computing, despite its reputation for being difficult to implement. The combination of ICL’s hardware and the capabilities of ALGOL 68 made the company a leader in the field of high-performance computing in the UK during this time period. Although the popularity of ALGOL 68 eventually declined with the rise of more widespread programming languages such as C and FORTRAN, its use on ICL machines helped to establish the language as an important part of the history of computer science.



JOVIAL, short for Jules’ Own Version of the International Algebraic Language (developed by a guy called Jules Schwartz), is a high-level programming language that was developed by a team at System Development Corporation (SDC) in the late 1950s. JOVIAL was designed specifically for applications in military aircraft control systems and was commissioned by the United States Air Force (USAF). The language was developed with the goal of being a robust and reliable tool for composing software for the electronics of military aircraft, making it well-suited for use in mission-critical scenarios.

JOVIAL was developed as a “high-order” programming language, meaning that it was designed to allow for the creation of high-level abstractions and was optimized for readability and maintainability. This made JOVIAL a popular choice for aircraft control systems, where the software needed to be both safe and dependable. JOVIAL systems were used for many years in actual air traffic control systems, making them a critical component of the infrastructure that supported military aviation.

In this way, JOVIAL can be compared to the modern programming language ADA, which was also designed for use in critical systems and is widely used in safety-critical industries such as aerospace, defense, and transportation. The development of JOVIAL was a major step forward in the evolution of high-level programming languages and its continued use in aircraft control systems is a testament to its robustness and reliability.



back to index


COBOL (1959)

In April 1959, Mary K. Hawes, who had identified the need for a common business language in accounting, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania. The meeting focused on strategies to get agreement on a common business computer language.


Photo of Admiral Grace Hopper (she was involved in guiding the project)

“Mary Hawes, a Burroughs Corporation programmer, called in March 1959 for computer users and manufacturers to create a new computer language—one that could run on different brands of computers and perform accounting tasks such as payroll calculations, inventory control, and records of credits and debits” (Ref#: A).
Representatives present that day included Grace Hopper, Jean Sammet, and Saul Gorn.
What they eventually came up with was COBOL, the “COmmon Business-Oriented Language”. In fact, the first draft of COBOL was produced in November 1959.

“During 1959 the first plans for the computer language COBOL emerged as a result of meetings of several committees and subcommittees of programmers from American business and government. This heavily annotated typescript was prepared during a special meeting of the language subcommittee of the Short-Range Committee held in New York City in November. COBOL programs would actually run the following summer, and the same program was successfully tested on computers of two different manufacturers in December 1960.”

COBOL Example Code




back to index


Simula (1964-67)

In tracing the evolution of Object-Oriented Programming (OOP) languages, many believe Simula is an important milestone. Developed in the 1960s at the Norwegian Computing Center in Oslo by Ole-Johan Dahl and Kristen Nygaard, Simula encompasses two iterations: Simula I and Simula 67. Syntactically, it stands as a comprehensive superset of ALGOL 60, while also bearing influences from Simscript.

Originally conceived as a tool for discrete event simulation, Simula underwent subsequent expansion and reimplementation to evolve into a robust general-purpose programming language. Simula 67 notably introduced several groundbreaking concepts to the programming world. These include objects, classes, inheritance, subclasses, virtual procedures, coroutines, discrete event simulation mechanisms, and the incorporation of garbage collection. Additionally, it pioneered various forms of subtyping beyond merely inheriting subclasses.

As its name suggests, Simula was designed for doing simulations, and the needs of that domain provided the framework for many of the features of object-oriented languages today.

Simula’s versatility is manifest in its diverse applications. It has been employed in simulating VLSI designs, process modeling, protocol development, algorithmic studies, as well as in typesetting, computer graphics, and educational contexts. Despite its foundational significance, the influence of Simula is occasionally overlooked. Nevertheless, its concepts have been reimagined and incorporated into subsequent languages like C++, Object Pascal, Java, and C#. Esteemed computer scientists, including Bjarne Stroustrup, the progenitor of C++, and James Gosling, the architect of Java, have publicly recognized Simula’s seminal influence on their work (Wikipedia).

Example Code


“A central new concept in SIMULA 67 is the “object”. An object is a self-contained program (block instance), having its own local data and actions defined by a “class declaration”. The class declaration defines a program (data and action) pattern, and objects conforming to that pattern are said to “belong to the same class””(Ref#: B).

Simula introduced the concept of classes, which was later picked up by many subsequent OO programming languages.
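The idea in the quotation above can be sketched in present-day Python (illustrative, not Simula): a class declaration defines a pattern of data and actions, and objects conform to that pattern.

```python
# A class declaration defines a pattern (data and actions);
# objects conforming to it "belong to the same class".

class Account:
    def __init__(self, owner, balance):
        self.owner = owner        # local data of the object
        self.balance = balance

    def deposit(self, amount):    # an action defined by the class
        self.balance += amount
        return self.balance

# Two objects belonging to the same class, each with its own local data:
a = Account("Ada", 100)
b = Account("Kristen", 50)
a.deposit(25)
print(a.balance)  # 125
print(b.balance)  # 50
```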




back to index


Pascal (1968/70)

Pascal is an imperative and procedural programming language, which Niklaus Wirth designed in 1968–69 and published in 1970, as a small, efficient language intended to encourage good programming practices using structured programming and data structuring. It is named in honor of the French mathematician, philosopher and physicist Blaise Pascal.
Key People

Niklaus Wirth

“From 1963 to 1967 he served as assistant professor of Computer Science at Stanford University and again at the University of Zurich. Then in 1968 he became Professor of Informatics at ETH Zürich, taking two one-year sabbaticals at Xerox PARC in California (1976–1977 and 1984–1985). Wirth retired in 1999” (Wikipedia).

Example Pascal Code

back to index


Prolog (~1972)

“Prolog is a logic programming language associated with artificial intelligence and computational linguistics.
Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.

Key People

The language was first conceived by Alain Colmerauer and his group in Marseille, France, in the early 1970s and the first Prolog system was developed in 1972 by Colmerauer with Philippe Roussel.

Prolog was one of the first logic programming languages, and remains the most popular among such languages today, with several free and commercial implementations available. The language has been used for theorem proving, expert systems, term rewriting, type systems, and automated planning, as well as its original intended field of use, natural language processing. Modern Prolog environments support the creation of graphical user interfaces, as well as administrative and networked applications.

Prolog is well-suited for specific tasks that benefit from rule-based logical queries such as searching databases, voice control systems, and filling templates”.
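As a toy illustration (in Python, not actual Prolog), facts and a rule such as grandparent(X, Z) :- parent(X, Y), parent(Y, Z). can be mimicked like this:

```python
# Facts, stored as relation tuples (a crude stand-in for a Prolog database):
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def parent(x, y):
    return ("parent", x, y) in facts

def grandparent(x, z):
    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # "Query" by searching for some intermediate Y that satisfies both goals.
    return any(parent(x, y) and parent(y, z)
               for (_, _, y) in facts)

print(grandparent("tom", "ann"))  # True
```

Real Prolog does far more (unification, backtracking over variables), but the flavor of computation-by-query over facts and rules is the same.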

back to index


The C Programming Language (1972)

“C is a general-purpose, imperative computer programming language, supporting structured programming, lexical variable scope and recursion, while a static type system prevents many unintended operations” (Wikipedia). “The C programming language was devised in the early 1970s as a system implementation language for the nascent Unix operating system. Derived from the typeless language BCPL, it evolved a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today” (Ref#: G).

The C language is considered low-level since it allows for (indeed it sometimes requires) manual memory management. As such there is no built-in garbage-collection mechanism; on the plus side, however, being closer to machine code (the language that a given processor speaks) can allow for better optimization. It is used for systems and general programming because it is typically fast and efficient.

Key People 

Dennis Ritchie
Ritchie worked for Bell Labs (AT&T) in the 1960s, along with several other Bell Labs employees, on a project called Multics (conceived as a time-sharing operating system, originally a joint effort with General Electric and MIT). In 1969 AT&T (Bell Labs) withdrew from the project because it could not produce an economically useful system. (A C compiler for Multics was developed much later, primarily to facilitate the porting of third-party software to Multics.)
Brian Kernighan worked with Ritchie and Thompson at Bell Labs. Following on from their work there, a book was written called “The C Programming Language” (1st edition). Written by Brian Kernighan and Dennis Ritchie, it became the classic text of the language (Ref#: A), although Brian clarified that he was not the creator of the C language but rather a fan of it.

Key features of C include:


  • Portable
  • Powerful
  • Fast and Efficient
  • Modularity
  • Platform Dependent
  • Use of pointers
  • Middle Level
  • Rich Library
  • Extensible
  • Structure
  • Supports recursion

It is a robust language with a rich set of built-in functions and operators that can be used to write complex programs. The C compiler combines the capabilities of an assembly language with some of the features of a high-level language. Programs written in C are efficient and fast due to the language’s variety of data types and powerful operators. A C program is basically a collection of functions supported by the C library, and we can also create our own functions and add them to the C library. C is among the most widely used languages in operating systems and embedded systems development today.

Links to The Unix Operating System
Unix was written in the C programming language, with both UNIX and C being developed at AT&T’s Bell Labs. In fact, the UNIX project was started in 1969 by Ken Thompson and Dennis Ritchie.

Code Examples

Hello World Example

Another Example



I: Brian Kernighan: UNIX, C, AWK, AMPL, and Go Programming | AI Podcast #109 with Lex Fridman [Internet Video]. Sourced on 20th July 2020.

back to index

ML (1973)

ML is a functional programming language which was developed by Robin Milner and others at the University of Edinburgh in Scotland. The purpose of ML was to create a language better suited to theorem proving than Lisp, which Milner had found often permitted mistakes when used for that purpose.

“ML (“Meta Language”) is a general-purpose functional programming language. It has roots in Lisp, and has been characterized as “Lisp with types”. ML is a statically-scoped functional programming language like Scheme. It is known for its use of the polymorphic Hindley–Milner type system, which automatically assigns the types of most expressions without requiring explicit type annotations, and ensures type safety – there is a formal proof that a well-typed ML program does not cause runtime type errors. ML provides pattern matching for function arguments, garbage collection, imperative programming, call-by-value and currying. It is used heavily in programming language research and is one of the few languages to be completely specified and verified using formal semantics. Its types and pattern matching make it well-suited and commonly used to operate on other formal languages, such as in compiler writing, automated theorem proving, and formal verification” (Ref#: A).

Aside on Hindley-Milner Type Inference

The “Hindley–Milner (HM) type system is a classical type system for the lambda calculus with parametric polymorphism. It is also known as Damas–Milner or Damas–Hindley–Milner. … Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis” (Wikipedia).
“HM has been rediscovered many times by many people. Curry used it informally in the 1950’s (perhaps even the 1930’s). He wrote it up formally in 1967 (published 1969). Hindley discovered it independently in 1969; Morris in 1968; and Milner in 1978. In the realm of logic, similar ideas go back perhaps as far as Tarski in the 1920’s” (Ref#: B).

“Among HM’s more notable properties are its completeness and its ability to infer the most general type of a given program without programmer-supplied type annotations or other hints. Algorithm W is an efficient type inference method that performs in almost linear time with respect to the size of the source, making it practically useful to type large programs. HM is preferably used for functional languages. It was first implemented as part of the type system of the programming language ML. Since then, HM has been extended in various ways, most notably with type class constraints like those in Haskell.” (Wikipedia)
The algorithm in question infers the type of each value from how it is used, formalizing the intuition that a type can be deduced from the operations a value supports.

Algorithm W is an efficient type inference algorithm that is used to deduce the type of variables in the Hindley-Milner (HM) type system. The algorithm operates in almost linear time with respect to the size of the source code, making it a practical solution for typing large programs.

The algorithm is an important part of the HM type system, as it allows the type system to infer the most general type of a given program without the need for programmer-supplied type annotations or hints. This makes the HM type system a powerful tool for functional programming, where a strong emphasis is placed on type correctness and type inference.

Algorithm W was first described by Robin Milner in the late 1970s, with a formal analysis later contributed by Luis Damas. It is considered one of the seminal contributions to the field of type theory and type inference in computer science.
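To make the idea concrete, here is a toy Python sketch of HM-style inference, unification plus type reconstruction, for a tiny lambda calculus. It is illustrative only: all names are invented, and it omits let-polymorphism, the occurs check, and other details of the real algorithm.

```python
import itertools

_fresh = itertools.count()

def fresh_var():
    # a type variable, represented as a tagged tuple
    return ("var", next(_fresh))

def resolve(t, subst):
    # follow substitution links until we reach a concrete type or a free variable
    while t[0] == "var" and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    # make two types equal, recording what we learn in `subst`
    # (the occurs check is omitted for brevity)
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return
    if t1[0] == "var":
        subst[t1] = t2
    elif t2[0] == "var":
        subst[t2] = t1
    elif t1[0] == "fun" and t2[0] == "fun":
        unify(t1[1], t2[1], subst)
        unify(t1[2], t2[2], subst)
    else:
        raise TypeError(f"cannot unify {t1} and {t2}")

def infer(expr, env, subst):
    # Algorithm W style inference for: literals, names, lambdas, applications
    kind = expr[0]
    if kind == "lit":                       # ("lit", 42)
        return ("int",)
    if kind == "name":                      # ("name", "x")
        return env[expr[1]]
    if kind == "lam":                       # ("lam", "x", body)
        arg_t = fresh_var()                 # guess a fresh type for the argument
        body_t = infer(expr[2], {**env, expr[1]: arg_t}, subst)
        return ("fun", arg_t, body_t)
    if kind == "app":                       # ("app", fn, arg)
        fn_t = infer(expr[1], env, subst)
        arg_t = infer(expr[2], env, subst)
        res_t = fresh_var()
        unify(fn_t, ("fun", arg_t, res_t), subst)  # fn must map arg -> result
        return res_t
    raise ValueError(f"unknown expression: {expr}")

# infer the type of (\x. x) 42 with no annotations: it comes out as int
subst = {}
identity = ("lam", "x", ("name", "x"))
result_t = infer(("app", identity, ("lit", 42)), {}, subst)
print(resolve(result_t, subst))             # ('int',)
```

Note how the type of the identity function's argument starts as an unknown variable and is pinned down to `int` purely by how the function is applied, which is the essence of inference by use.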

Standard ML

SML or Standard ML is a modern dialect of the ML language.

Here is an extended set of Standard ML Code:

More Examples Here:




ADA (~1978/79 onwards)

ADA is a high-level language whose main use is in defense applications.

As the sun set on the late 1970s, the world was swept away by the “sweet” melodies of the Bee Gees’ hit song “How Deep is Your Love”. Meanwhile, in a small corner of the computer science world, a group of passionate engineers were deeply devoted to a different kind of love – the love of solving complex programming problems.

The United States Department of Defense had issued a challenge to the world, seeking a new language to tackle its legacy code issues and other programming difficulties. It was in this arena that the team of computer enthusiasts found their calling, and poured their hearts into creating a language that would rise to the top.

One of the competition entries caught the eye of the judges, a language that was elegant, robust, and designed specifically for mission-critical systems. And so, the love affair between ADA and the DOD began, a relationship that would endure for decades to come, as ADA became the programming language of choice for some of their key systems.

It had started with a group of dedicated computer scientists in France who had been busy at work, tasked with solving a critical challenge faced by the United States Department of Defense. The DOD, burdened with a vast array of over 450 programming languages, sought to streamline its system and find a single, stable and type-safe solution.

The call for a new programming language was answered by a team led by the brilliant French computer scientist Jean Ichbiah of CII Honeywell Bull, a company with roots dating back to 1931. Under contract with the DoD, Ichbiah and his team worked tirelessly to create a language that would meet the needs of the military, culminating in the proposed language, “Green”.

Their hard work paid off as “Green” emerged victorious in the DOD competition, and was eventually named Ada, in honor of Ada Lovelace, the pioneering “computer programmer” who lived in the 19th century.

Ada can be used to solve many of the same tasks as might be implemented in the likes of C or C++; however, it has one of the best type-safety systems available in a statically typed programming language, making it “safer”. In 1987, the DOD began to require the use of Ada for every software project where new code was going to make up more than 30% of the project, though exceptions to this rule were often granted. In 1997, the DoD Ada mandate was effectively removed as the DoD began to embrace more commercial off-the-shelf technology as opposed to always developing custom solutions for each use case.

ADA went on to be used in a number of safety-critical systems, in everything from Boeing jetliners to missile systems. The lack of “crashability” of programs written in ADA (due to it being strongly typed) was naturally very important to clients who wanted to use it in these critical systems.

Key People 

Jean Ichbiah (1940 – 2007) was a French computer scientist of Jewish origin, descended from Greek and Turkish Jews from the area of Thessaloniki who had previously emigrated to France.

“Ichbiah’s team submitted a language design labeled “Green” to a competition to choose the United States Department of Defense’s embedded programming language. When Green was selected in 1978, he continued as chief designer of the language, now named “Ada”. In 1980, Ichbiah left CII-HB and founded the Alsys corporation in La Celle-Saint-Cloud, which continued language definition to standardize Ada 83, and later went into the Ada compiler business, also supplying special validated compiler systems to NASA, the US Army, and others. He later moved to the Waltham, Massachusetts subsidiary of Alsys” (Ref#: A).

Code Example (ADA 95)

Source: Ref C

Polymorphism in Ada

Prior to ADA 95, there were some aspects of object-oriented languages that Ada did not explicitly support out of the box, including inheritance and polymorphism, but it was still possible to implement these in the language by adding extra code.

“The full power of object orientation is realized by polymorphism, class-wide programming and dynamic dispatching…” (Ref#: D).

“In 1995 facilities were added to Ada to easily support inheritance. Inheritance lets us define new types as extensions of existing types; these new types inherit all the operations of the types they extend.”

[TODO: Add example code demonstrating this.]
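The general idea, defining a new type as an extension of an existing one so that it inherits all its operations, can be sketched in Python (the names here are hypothetical, and Ada 95's actual mechanics of tagged types and type extension differ in detail):

```python
# Hypothetical sketch of type extension and inherited operations,
# the idea Ada 95's tagged types support, shown here in Python.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class SavingsAccount(Account):           # a new type extending Account
    def add_interest(self, rate):
        self.balance += self.balance * rate

s = SavingsAccount(100)
s.deposit(50)                            # inherited operation, no extra code needed
s.add_interest(0.10)                     # operation added by the extension
print(s.balance)                         # prints 165.0
```

The extended type gets `deposit` for free from its parent, which is exactly the "inherit all the operations of the types they extend" property described above.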

ADA 95 Example





back to index


C++ / “C Plus Plus” (1979)


To explore C++ we must travel back, back further indeed through some sort of Psychedelic wormhole, floating then all the way back to the 1970s. Amongst the youth of the day the fashion was polyester, bright colors, flares, tight-fitting pants, and platform shoes.

Then on to 1979: Britain’s first female prime minister, Margaret Thatcher, was elected; the cult TV series Tales of the Unexpected began to air; and the British pop charts had everything from the Village People (YMCA) to The Police (as the culture that became the 1980s took hold).

Meanwhile, a dude called Bjarne Stroustrup, a Danish computer scientist who had developed his ideas during a PhD at Cambridge, was up to something he called “C with Classes”; in fact, he had come up with what was to turn into C++ as we know it today. Stroustrup recalls that “C++ was designed to provide Simula’s facilities for program organization together with C’s efficiency and flexibility for systems programming.”

The image below shows an IBM 370/165 (though not Stroustrup himself), which was installed at the Cambridge University Computing Service in 1971 and is likely the machine he used at the time, although he refers to it as an “IBM 360/165” in a document he published (C++Ref#A).


Apparently the genesis of the ideas that were to lead to the creation of C++ occurred to Stroustrup during his Ph.D. at Cambridge University, as a sort of side product of the work for his thesis: a type of compiler written in the Simula programming language (Simula, developed in the mid-60s and based on ALGOL but with many extra features, was designed for simulation and is often seen as perhaps the first OO language) (C++Ref#:B).
Stroustrup loved the features of Simula, but thought it would be super cool if he could combine some of Simula’s structure with a faster, C-based programming language, as he also loved working at the low-level machine-code end of things.

Example C++ Code



Stroustrup realized that you needed a strong type system to make a language work well, but whilst he found the type systems of other languages frustrating, the ability to build out your own types (or “classes”), as found in Simula, appealed to him.
He aimed for a language that would be reasonably understandable to most programmers in the domains it reached, without losing raw speed and efficiency.
He wanted to be able to make his own types based on the problem he was solving (as in Simula), so he brought this kind of OO concept into C++, along with the ability to tap into strengths like run-time polymorphism when solving tasks.


C++, because it is rooted in C, is considered an efficient language for real-world, large-scale deployments.
It’s a language that’s great for working with low-level hardware efficiently whilst also offering great tools for abstraction. It also derived OO concepts from Simula, such as virtual functions for inheritance, which C++ replicated using a jump table; the C++ version was simpler and faster, so the overhead of inheritance-based dispatch was lower, even when compiling through its original intermediate stage of optimized C code.

back to index


Smalltalk (1971-80)

This language is one of the root languages that led to modern OO programming.
“Smalltalk is an object-oriented, dynamically typed, reflective programming language”.
The language was principally designed and created in part for educational use, “more so for constructionist learning, at the Learning Research Group (LRG) of Xerox PARC by Alan Kay, Dan Ingalls, Adele Goldberg, Ted Kaehler, Scott Wallace, and others during the 1970s”.

Smalltalk is often used to refer to the Smalltalk-80 programming language, which is perhaps the first version to be made publicly available; it was created in 1980.
“Smalltalk was the product of research led by Alan Kay at Xerox Palo Alto Research Center (PARC); Alan Kay designed most of the early Smalltalk versions, Adele Goldberg wrote most of the documentation, and Dan Ingalls implemented most of the early versions” (Wikipedia).

“The first version, termed Smalltalk-71, was created by Kay in a few mornings on a bet that a programming language based on the idea of message passing inspired by Simula could be implemented in “a page of code”. A later variant used for research work is now termed Smalltalk-72 and influenced the development of the Actor model. Its syntax and execution model were very different from modern Smalltalk variants” (Wikipedia).

“Smalltalk-80 was the first language variant made available outside of PARC, first as Smalltalk-80 Version 1, given to a small number of firms (Hewlett-Packard, Apple Computer, Tektronix, and Digital Equipment Corporation (DEC)) and universities (UC Berkeley) for peer review and implementing on their platforms. Later (in 1983) a general availability implementation, named Smalltalk-80 Version 2, was released as an image (platform-independent file with object definitions) and a virtual machine specification. ANSI Smalltalk has been the standard language reference since 1998″ (Wikipedia).

“…The sixties, particularly in the ARPA community, gave rise to a host of notions about “human-computer symbiosis” through interactive time-shared computers, graphics screens and pointing devices. Advanced computer languages were invented to simulate complex systems such as oil refineries and semi-intelligent behavior. The soon to follow paradigm shift of modern personal computing, overlapping window interfaces, and object-oriented design came from seeing the work of the sixties as something more than a “better old thing”. That is, more than a better way: to do mainframe computing; for end-users to invoke functionality; to make data structures more abstract. Instead the promise of exponential growth in computing/$/volume demanded that the sixties be regarded as “almost a new thing” and to find out what the actual “new things” might be. For example, one would compute with a handheld “Dynabook” in a way that would not be possible on a shared mainframe; millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner; and needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behavior. Early Smalltalk was the first complete realization of these new points of view as parented by its many predecessors in hardware, language and user interface design. It became the exemplar of the new computing, in part, because we were actually trying for a qualitative shift in belief structures—a new Kuhnian paradigm in the same spirit as the invention of the printing press—and thus took highly extreme positions which almost forced these new styles to be invented” (summarized abstract from Kay, A.C. (1993). The Early History of Smalltalk).

Smalltalk was one of many Object-Oriented programming languages based on Simula.
A Smalltalk object can do exactly three things:

  1. Hold state (references to other objects).
  2. Receive a message from itself or another object.
  3. In the course of processing a message, send messages to itself or another object.
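These three capabilities can be roughly sketched in Python. This is a hypothetical illustration with invented names; real Smalltalk syntax and semantics differ (for instance, an unhandled message would trigger the doesNotUnderstand: mechanism):

```python
# Hypothetical Python sketch of the Smalltalk object model described above:
# an object holds state, receives messages, and sends messages in turn.
class Counter:
    def __init__(self):
        self.count = 0                    # 1. hold state

    def receive(self, message):           # 2. receive a message
        if message == "increment":
            self.count += 1
            return self.receive("value")  # 3. send a message to itself
        if message == "value":
            return self.count
        # Smalltalk would invoke doesNotUnderstand: here
        raise AttributeError(f"does not understand: {message}")

c = Counter()
c.receive("increment")
c.receive("increment")
print(c.receive("value"))                 # prints 2
```

The point is that every interaction goes through messages; the object decides for itself how to respond to each one.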

Smalltalk was very influential on subsequent programming languages including Objective-C, Java, Python, and Ruby. Smalltalk was actually a kind of side product of a lot of wider ARPA-funded research that was done by this team.

Code Example




back to index

Objective-C (1980s)

This programming language dates back to the early 1980s, and is closely associated with a company called NeXT. The language took C and added in some Smalltalk-like messaging elements. It went on to be used by Apple, who took over the work when they acquired NeXT in late 1996, the deal that also brought Steve Jobs back to Apple to again lead it in a new direction.
Key People

The people behind this language were a couple of gentlemen called Brad Cox and Tom Love, initially linked to a company called Stepstone (originally Productivity Products International). The rights were then acquired in ’95 by NeXT Computer, and subsequently transferred again, this time to Apple, who would go on to popularize the language, primarily through releasing development frameworks for their operating systems to a global network of developers.


How it Developed
Drs. Cox and Love had learned Smalltalk while at ITT Corporation’s Programming Technology Center in 1981. “Tom Love was the Director of the Advanced Technology Group at the Programming Technology Center (PTC), and he hired Brad Cox into that group at ITT” (ref#: C). Dr. Cox seemingly thought of what was to become Objective-C based on ideas he discovered in an August 1981 issue of Byte Magazine devoted to the topic of Smalltalk. This included, for example, an article by Larry Tesler called “The Smalltalk Environment”.

Ideas developed by Cox based on this were seen in a 1983 paper called “The Object-Oriented Precompiler: Programming Smalltalk-80 Methods in C Language”. He originally referred to the language as OOPC. Following on from his original ideas, a second generation of the language was rebuilt from the ground up at Schlumberger Research; subsequently, a third version of the language was totally rebooted when Love & Cox worked at Productivity Products in June of 1983 (ref#: C).

“In 1988, NeXT licensed Objective-C from StepStone (the new name of PPI, the owner of the Objective-C trademark) and extended the GCC compiler to support Objective-C. NeXT developed the AppKit and Foundation Kit libraries on which the NeXTSTEP user interface and Interface Builder were based” (Wikipedia).

Key Language Characteristics:
“The Objective-C model of object-oriented programming is based on message passing to object instances. In Objective-C one does not call a method; one sends a message. This is unlike the Simula-style programming model used by C++” (Wikipedia).

Example Code

back to index

SQL “sequel” (70s-1986)

Ah, although its origins lie back in the 70s, SQL was more fully standardized in 1986, the year that Kiss by Prince hit the charts. Or, not as much fun, the year the Soviet nuclear reactor at Chernobyl exploded (as explored in the eponymous HBO miniseries). SQL has since become the de facto industry standard language for relational systems.
Meanwhile, somewhere else in the world (i.e. at IBM, the home of business computing at the time) was developed “a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data where there are relations between different entities/variables of the data. SQL offers two main advantages over older read/write APIs like ISAM or VSAM: first, it introduced the concept of accessing many records with one single command; and second, it eliminates the need to specify how to reach a record, e.g. with or without an index” (Wikipedia).

It’s considered a 4GL or 4th generation language in that it is made to be human-readable.

SQL data retrieval
SQL has four commands for data manipulation

SELECT: for retrieving data

INSERT: for creating data

UPDATE: for altering data

DELETE: for removing data

For example, SELECT typically has the format:

SELECT columns (or ‘*’)
FROM relation(s)
[WHERE constraint(s)] ;

Which, in a simple example (querying a hypothetical SINGERS table), could look like:

SELECT *
FROM SINGERS
WHERE NAME = ‘P Abdul’;

Creating Tables
Creating tables can look like:
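As a hypothetical illustration, here is a table being created and queried using Python's built-in sqlite3 module; the SINGERS table and its columns are invented for this example:

```python
# Hypothetical sketch using Python's built-in sqlite3 module; the SINGERS
# table and its columns are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")        # throwaway in-memory database
cur = conn.cursor()

# CREATE TABLE defines the relation's columns and their types
cur.execute("""
    CREATE TABLE SINGERS (
        ID    INTEGER PRIMARY KEY,
        NAME  VARCHAR(50) NOT NULL,
        GENRE VARCHAR(30)
    )
""")

# INSERT creates a row; SELECT ... WHERE retrieves matching rows
cur.execute("INSERT INTO SINGERS (NAME, GENRE) VALUES ('P Abdul', 'Pop')")
row = cur.execute(
    "SELECT NAME, GENRE FROM SINGERS WHERE NAME = 'P Abdul'"
).fetchone()
print(row)                                # ('P Abdul', 'Pop')
```

The same CREATE/INSERT/SELECT statements would work, with minor dialect differences, in most relational database systems.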

Key People
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce after learning about the relational model from Ted Codd in the early 1970s. This version, initially called SEQUEL (Structured English Query Language), was designed to manipulate and retrieve data stored in IBM’s original quasi-relational database management system, System R, which a group at IBM San Jose Research Laboratory had developed during the 1970s.

Donald D. Chamberlin

“Donald Chamberlin was born in 1944 in San Jose, California, and holds a B.S. in engineering from Harvey Mudd College (1966) and an M.S. (1967) and Ph.D. (1971) in electrical engineering from Stanford University. Chamberlin is best known as co-inventor of SQL (Structured Query Language), the world’s most widely used database language. Developed in the mid-1970s by Chamberlin and Raymond Boyce, SQL was the first commercially successful language for relational databases. Chamberlin was also one of the managers of IBM’s “System R” project, which produced the first SQL implementation and developed much of IBM’s relational database technology.
Chamberlin joined IBM Research at the T.J. Watson Research Center, Yorktown Heights, New York, in 1971. In 1973, he returned to San Jose, California, and continued his work at IBM’s Almaden Research Center, where he was named an IBM Fellow in 2003. In 2009, he was appointed a Regents’ Professor at UC Santa Cruz.
Chamberlin was named an ACM Fellow in 1994 and an IEEE Fellow in 2007. In 1997, he received the ACM SIGMOD Innovations Award and was elected to the National Academy of Engineering. In 2005, he was given an honorary doctorate by the University of Zurich.”

Raymond F. Boyce

“In the early 1970’s, together with Donald D. Chamberlin he co-developed Structured Query Language (SQL) while managing the Relation Database development group for IBM in San Jose, California. Initially called SEQUEL (Structured English Query Language) and based on their original language called SQUARE (Specifying Queries As Relational Expressions). SEQUEL was designed to manipulate and retrieve data in relational databases. By 1974, he and Chamberlin published “SEQUEL: A Structured English Query Language” which detailed their refinements to SQUARE and introduced us to the data retrieval aspects of SEQUEL. It was one of the first languages to use Edgar F. Codd’s relational model. SEQUEL was later renamed to SQL by dropping the vowels, because SEQUEL was a trademark registered by the Hawker Siddeley aircraft company. Today, SQL has been generally established as the standard relational databases language. In 1974, he and Edgar F. Codd, co-developed the Boyce–Codd normal form (or BCNF). It is a type of normal form that is used in database normalization. The goal of relational database design is to generate a set of database schemas that store information without unnecessary redundancy. Boyce-Codd accomplishes this and allows users to retrieve information easily. Using BCNF, databases will have all redundancy removed based on functional dependencies. It is a slightly stronger version of the third normal form. He died in 1974 as a result of an aneurysm of the brain, leaving behind his wife of almost five years, Sandy, and his daughter Kristin, who was just ten months old.”



History of SQL

  • 1970 – E.F. Codd develops the relational database concept
  • 1974-1979 – System R with Sequel (later called SQL) is created at the IBM Research Lab
  • 1979 – Oracle markets the first relational DB with SQL
  • 1981 – SQL/DS first available RDBMS system on DOS/VSE
  • Others followed: INGRES(1981), IDM(1982), DG/SGL(1984), Sybase(1986)
  • 1986 – The ANSI SQL was released
  • 1989, 1992, 1999, 2003, 2006, 2008 – Major ANSI standard updates
  • Present Day – SQL is supported by most major database vendors

Basic Data Types in SQL

  • Character types – char, varchar
  • Integer values – integer, smallint
  • Decimal numbers – numeric, decimal
  • Date and time types – date, time, timestamp

“It is not usually very long before a requirement arises to combine information from more than one table, into one coherent query result”.
There are various kinds of JOINs; we won’t go into them all in this article, but will provide more details elsewhere.
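As a minimal illustration of the basic idea, here is a simple JOIN sketched with Python's built-in sqlite3 module (all table and column names are invented):

```python
# Hypothetical sqlite3 sketch of a simple JOIN across two related tables.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE ARTISTS (ID INTEGER PRIMARY KEY, NAME TEXT);
    CREATE TABLE ALBUMS  (ID INTEGER PRIMARY KEY, TITLE TEXT,
                          ARTIST_ID INTEGER REFERENCES ARTISTS(ID));
    INSERT INTO ARTISTS VALUES (1, 'Prince');
    INSERT INTO ALBUMS  VALUES (1, 'Parade', 1);
""")

# The JOIN matches each album row to its artist row via ARTIST_ID
rows = cur.execute("""
    SELECT ARTISTS.NAME, ALBUMS.TITLE
    FROM ARTISTS
    JOIN ALBUMS ON ALBUMS.ARTIST_ID = ARTISTS.ID
""").fetchall()
print(rows)                               # [('Prince', 'Parade')]
```

The result combines columns from both tables into one coherent query result, which is exactly the requirement described above.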

SQL is best used for running and interacting with your data layer. Whilst it is possible to put business logic into our SQL databases (and sometimes Database Administrators, or DBAs, will push for this due to their bias toward it), this practice is best avoided, as it is typically best to separate the application layer from the database layer. This does not mean, however, that we should not take advantage of careful use of things like stored procedures (“stor procs”) in, for example, our Microsoft SQL Server databases or similar, as this often confers good efficiency benefits.


Different Flavours of SQL

“Although SQL is an ANSI (American National Standards Institute) standard, there are many different versions of the SQL language.
However, to be compliant with the ANSI standard, they all support at least the major commands (such as SELECT, UPDATE, DELETE, INSERT, WHERE) in a similar manner.
Note: Most of the SQL database programs also have their own proprietary extensions in addition to the SQL standard!”

Microsoft SQL Server

SQL Server is a relational database management system (RDBMS) developed by Microsoft. Built on SQL, it is also tied to Transact-SQL (T-SQL), Microsoft’s own variant of SQL that adds a set of proprietary programming constructs. Its main purpose is as a database server, that is to say, storing and retrieving data requested by other applications, either locally or over a network, including the internet.
Microsoft has (in the past) tried to tie SQL Server to the Windows environment, in a similar way to their attempt to create essentially their own proprietary version of Java in the form of C#, which was also geared towards tying developers to their operating systems. However, in 2016 Microsoft made SQL Server available on Linux, and it became generally available to run on both Windows and Linux.

SQL Server supports different data types, including primitive types such as Integer, Float, Decimal, Char (including character strings), Varchar (variable length character strings), binary (for unstructured blobs of data), Text (for textual data) among others. The rounding of floats to integers uses either Symmetric Arithmetic Rounding or Symmetric Round Down (fix) depending on arguments: SELECT Round(2.5, 0) gives 3.
Microsoft SQL Server also allows user-defined composite types (UDTs) to be defined and used. It also makes server statistics available as virtual tables and views (called Dynamic Management Views or DMVs). In addition to tables, a database can also contain other objects including views, stored procedures, indexes and constraints, along with a transaction log. A SQL Server database can contain a maximum of 2³¹ objects, and can span multiple OS-level files with a maximum file size of 2⁶⁰ bytes (1 exabyte). The data in the database are stored in primary data files with an extension .mdf. Secondary data files, identified with a .ndf extension, are used to allow the data of a single database to be spread across more than one file, and optionally across more than one file system. Log files are identified with the .ldf extension.

SQL Server Data Types

Data types in SQL Server are organized into the following categories:

  • Exact numerics
  • Approximate numerics
  • Date and time
  • Character strings
  • Unicode character strings
  • Binary strings
  • Other data types




Transact-SQL or T-SQL is a proprietary extension to SQL developed by Sybase and Microsoft. It expands on the functionality of basic SQL in a variety of ways including bringing in local variables, procedural programming capabilities, and adding various support functions for strings, dates, and math operations. There are also some changes in the use of the DELETE and UPDATE statements in T-SQL.

Features of T-SQL

Temporary Tables

The name of the temporary table starts with a hash symbol (#). For example, the following statement creates a temporary table using the SELECT INTO statement:
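A sketch of what such a statement might look like in T-SQL (the table and column names are hypothetical):

```sql
-- Copy matching rows into a new temporary table #TempCustomers
SELECT ID, Name
INTO #TempCustomers
FROM Customers
WHERE Region = 'EMEA';
```

The temporary table is local to the session that created it and is dropped automatically when that session ends.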

Stored Procedures

A Stored Procedure is a piece of prepared SQL code that you can save so that the code can be reused over and over again. You can also pass parameters to a stored procedure, so that it can act based on the parameter value(s) that are passed to it. We’ve also seen stored procedures in Oracle-based solutions.
“Stored procedures…allow one to move code that enforces business rules from the application to the database. As a result, the code can be stored once for use by different applications. Also, the use of stored procedures can make one’s application code more consistent and easier to maintain. This principle is similar to the good practice in general programming in which common functionality should be coded separately as procedures or functions”
The basic syntax of a stored procedure is as follows:
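A minimal hypothetical sketch in T-SQL (the procedure, table, and parameter names are invented for illustration):

```sql
-- Define a reusable, parameterised server-side procedure
CREATE PROCEDURE GetCustomerByName
    @Name VARCHAR(50)
AS
BEGIN
    SELECT * FROM Customers WHERE Name = @Name;
END;
```

Once created, it can be invoked with something like: EXEC GetCustomerByName @Name = 'P Abdul';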

T-SQL includes support for stored procedures, which act as executable server-side routines that can accept parameters.
Some of the most important advantages of using stored procedures are summarised as follows:

  • Because the processing of complex business rules can be performed within the database, significant performance improvement can be obtained in a networked client-server environment (refer to client-server chapters for more information).
  • Since the procedural code is stored within the database and is fairly static, applications may benefit from the reuse of the same queries within the database. For example, the second time a procedure is executed, the DBMS may be able to take advantage of the parsing that was previously performed, improving the performance of the procedure’s execution.
  • Consolidating business rules within the database means they no longer need to be written into each application, saving time during application creation and simplifying the maintenance process. In other words, there is no need to reinvent the wheel in individual applications, when the rules are available in the form of procedures.



There are various flow control keywords in SQL Server that we can use including…

Source: Ref# U


This new exception handling behaviour was introduced by Microsoft in SQL Server 2005, with the purpose of enabling developers to simplify their code.
It “Implements error handling for Transact-SQL that is similar to the exception handling in the Microsoft Visual C# and Microsoft Visual C++ languages. A group of Transact-SQL statements can be enclosed in a TRY block. If an error occurs in the TRY block, control is passed to another group of statements that is enclosed in a CATCH block”.
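As a minimal sketch, the construct looks like this:

```sql
BEGIN TRY
    SELECT 1 / 0;                           -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()  AS ErrorNumber,  -- built-in error functions are
           ERROR_MESSAGE() AS ErrorMessage; -- available inside the CATCH block
END CATCH;
```

When the statement in the TRY block fails, control jumps to the CATCH block instead of aborting the batch.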


SQLite

“SQLite is an open source embedded relational database management system (RDBMS) contained in a C programming library. Relational database systems are used to store data in large tables.
In contrast to other popular RDBMS products like Oracle Database, IBM’s DB2, and Microsoft’s SQL Server, SQLite does not require any administrative overhead or setup complexity.
Whereas those other databases run as standalone processes, SQLite does not: you link it into your application statically or dynamically.
SQLite uses dynamically and weakly typed SQL columns, meaning you can store any value in any column, regardless of the data type. SQLite implements most of the SQL92 standard” (Ref#: F).

Features of SQLite

  • Serverless
  • Zero Configuration
  • Cross-Platform
  • Self-Contained (A single library contains the entire database system, which integrates directly into a host application).
  • Transactional (ACID-compliant – all queries are Atomic, Consistent, Isolated, and Durable).
  • Light-weight
  • Familiar language
  • Highly Reliable

Default Constraint in SQLite


Then when we don’t provide all the values, the DEFAULT one will be automatically populated for us:
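For example (a hypothetical sketch using Python's built-in sqlite3 module, with invented table and column names):

```python
# Hypothetical sqlite3 sketch of a DEFAULT constraint: the column's value
# is filled in automatically when an INSERT omits it.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE ORDERS (
        ID     INTEGER PRIMARY KEY,
        ITEM   TEXT NOT NULL,
        STATUS TEXT DEFAULT 'pending'    -- the DEFAULT constraint
    )
""")
cur.execute("INSERT INTO ORDERS (ITEM) VALUES ('widget')")  # STATUS omitted
row = cur.execute("SELECT ITEM, STATUS FROM ORDERS").fetchone()
print(row)                               # ('widget', 'pending')
```

Because the INSERT supplies no STATUS, SQLite populates the column with the declared default value.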

Playing With sqlite3
To open sqlite3 from our Mac Terminal (or iTerm), we can just type sqlite3 (assuming it’s installed).

back to index

Perl (~1987)

The year was 1987, the year that ‘The Simpsons’ made its first appearance (as part of the Tracy Ullman Show), and Whitney Houston hit the charts with So Emotional, with MJ and U2 also popular artists who had hits that year.


Perl is a family of two high-level, general-purpose, interpreted, dynamic programming languages. “Perl” refers to Perl 5, but from 2000 to 2019 it also referred to its redesigned “sister language”, Perl 6, before the latter’s name was officially changed to Raku in October 2019.

Though Perl is not officially an acronym, there are various backronyms in use, including “Practical Extraction and Reporting Language”. Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier. Since then, it has undergone many changes and revisions. Raku, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another.

The Perl languages borrow features from other programming languages including C, shell script (sh), AWK, and sed; Wall also alludes to BASIC and Lisp in the introduction to Learning Perl (Schwartz & Christiansen) and so on. They provide text processing facilities without the arbitrary data-length limits of many contemporary Unix command line tools, facilitating manipulation of text files. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its unsurpassed regular expression and string parsing abilities.

In addition to CGI, Perl 5 is used for system administration, network programming, finance, bioinformatics, and other applications, such as for GUIs. It has been nicknamed “the Swiss Army chainsaw of scripting languages” because of its flexibility and power, and also its ugliness. In 1998, it was also referred to as the “duct tape that holds the Internet together,” in reference to both its ubiquitous use as a glue language and its perceived inelegance.


Key People
It was in that same year that a dude called Larry Wall, who worked at a company called Unisys, decided to develop Perl; Mr. Wall’s aim with Perl was to make a Unix scripting language in order to make report processing easier to do. He decided to post Perl to the comp.sources Usenet newsgroup in late 1987.


Example Perl Code

“[Perl] has undergone many changes and revisions. Perl 6, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another” (Wikipedia).

back to index

Erlang (~1986)


Erlang is a general-purpose, concurrent, functional programming language, and a garbage-collected runtime system. The term Erlang is used interchangeably with Erlang/OTP, or Open Telecom Platform (OTP), which consists of the Erlang runtime system, several ready-to-use components (OTP) mainly written in Erlang, and a set of design principles for Erlang programs.
The Erlang runtime system is designed for systems with these traits:

  • Distributed
  • Fault-tolerant
  • Soft real-time
  • Highly available, non-stop applications
  • Hot swapping, where code can be changed without stopping a system.

Erlang does not encourage defensive programming; the philosophy is instead to “let it crash”, which leads to smaller programs.
Processes share nothing and are based on immutable state.
Supervisor processes are used to monitor individual processes, and to respond when those processes crash.
B: Erlang Programming Language – Computerphile [Video]. Retrieved on 18th Dec 2019.

back to index

Python (1980s)

Python is a general-purpose language dating back to the 1980s, and so relatively new as programming languages go. Python aims to be as readable as possible, and is thus close to English in many ways, with a limited set of built-in syntax. It uses indentation, instead of things like curly brackets, to denote scope and delineate the contents of functions and classes.
Python programmers generally conform to the PEP 8 style guide, which means code should be readable no matter which programmer created it, as long as they name things meaningfully.
Being a small language, it is found on embedded devices as well as on servers, and it works well with HTTP too.
Python is in heavy use for scientific computing, and there are a number of libraries built for us to use for these purposes, for example EarthPy.
Key People
Guido van Rossum

Dutch programming dude Mr. van Rossum started making Python in 1989. It was the year “Two Hearts” by Phil Collins and “The Living Years” by Mike & The Mechanics were in the pop charts. More importantly perhaps, in November of that year the Berlin Wall started to come down, representing the start of the fall of the so-called Iron Curtain.
Meanwhile, van Rossum started on Python with the aim of creating “a descendant of ABC that would appeal to Unix/C hackers”. He felt that ABC was a prisoner to its syntax and its design as a teaching language, and wanted to free its feature set in a nicer syntactical format.
Example Code

Python uses indentation to denote scope:
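A small illustrative snippet (the function name is invented): the body of the function and of the if-block are defined purely by their indentation level, with no braces in sight.

```python
def classify(n):
    # Everything indented under 'def' belongs to the function body.
    if n % 2 == 0:
        return "even"   # indented again: inside the if-block
    return "odd"        # back out one level: after the if-block

print(classify(10))  # even
print(classify(7))   # odd
```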

Python is an interpreted language, but current versions also use a bytecode compilation step.

Versions of Python


Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0, released in 2000, introduced features like list comprehensions and a garbage collection system with reference counting.

Python 3.0, released in 2008, was a major revision of the language that is not completely backward-compatible, and much Python 2 code does not run unmodified on Python 3.

“”” (Ref#: B)





back to index

Haskell (1990/2010)

Haskell has been described as a “modern Lisp”; it is a functional programming language, and it’s also called a “lazy functional language”, meaning that it allows for lazy evaluation, which “makes it practical to modularize a program as a generator that constructs a large number of possible answers, and a selector that chooses the appropriate one”. It’s also described as a “general-purpose compiled purely functional programming language”.
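Haskell’s laziness has no direct analogue in most mainstream languages, but as a rough illustration, the generator/selector split can be sketched with Python generators, which likewise produce values only on demand (the function names here are invented):

```python
from itertools import count

def squares():
    # A conceptually infinite "generator of possible answers".
    for n in count(1):
        yield n * n

def first_square_over(limit):
    # The "selector": it consumes only as many candidates as it needs.
    for s in squares():
        if s > limit:
            return s

print(first_square_over(50))  # 64
```

Only eight squares are ever computed here; the rest of the infinite stream is never evaluated, which is the essence of the lazy generator/selector pattern.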
It is named after Haskell Brooks Curry and incorporates the principles of the lambda calculus.
“Its main implementation is the Glasgow Haskell Compiler.” GHC originally began in 1989 as a prototype, written in LML (Lazy ML) by Kevin Hammond at the University of Glasgow.
“GHC proper was begun in the autumn of 1989, by a team consisting initially of Cordelia Hall, Will Partain, and Peyton Jones. It was designed from the ground up as a complete implementation of Haskell in Haskell, bootstrapped via the prototype compiler. The only part that was shared with the prototype was the parser, which at that stage was still written in Yacc and C. The first beta release was on 1 April 1991 (the date was no accident), but it was another 18 months before the first full release (version 0.10) was made in December 1992. This version of GHC already supported several extensions to Haskell: monadic I/O (which only made it officially into Haskell in 1996), mutable arrays, unboxed data types (Peyton Jones and Launchbury, 1991), and a novel system for space and time profiling (Sansom and Peyton Jones, 1995). A subsequent release (July 1993) added a strictness analyser” (Ref#: A).



“…monads are one of the most distinctive language design features in Haskell. Monads were not in the original Haskell design, because when Haskell was born a “monad” was an obscure feature of category theory whose implications for programming were largely unrecognised”(Ref#: A).

“A monad is a design pattern that allows structuring programs generically while automating away boilerplate code needed by the program logic. Monads achieve this by providing their own data type, which represents a specific form of computation, along with one procedure to wrap values of any basic type within the monad (yielding a monadic value) and another to compose functions that output monadic values (called monadic functions).
This allows monads to simplify a wide range of problems, like handling potential undefined values (with the Maybe monad), or keeping values within a flexible, well-formed list (using the List monad). With a monad, a programmer can turn a complicated sequence of functions into a succinct pipeline that abstracts away auxiliary data management, control flow, or side-effects.
Both the concept of a monad and the term originally come from category theory, where a monad is defined as a functor (a map between categories – where a category or abstract category can be defined to be a collection of “objects” that are linked by “arrows”. A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object) with additional structure. Research beginning in the late 1980s and early 1990s established that monads could bring seemingly disparate computer-science problems under a unified, functional model. Category theory also provides a few formal requirements, known as the monad laws, which should be satisfied by any monad and can be used to verify monadic code.
Since monads make semantics explicit for a kind of computation, they can also be used to implement convenient language features. Some languages, such as Haskell, even offer pre-built definitions in their core libraries for the general monad structure and common instances” (Wikipedia).
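As a rough, non-Haskell illustration of the Maybe idea described above, here is a sketch in Python where None plays the role of Nothing and a small bind helper chains together functions that may fail (all names here are invented for illustration):

```python
def bind(value, fn):
    # Propagate failure (None) without calling fn; otherwise apply fn.
    return None if value is None else fn(value)

def parse_int(s):
    # Returns None (our "Nothing") when parsing fails.
    return int(s) if s.lstrip("-").isdigit() else None

def reciprocal(n):
    # Returns None when the result would be undefined.
    return None if n == 0 else 1 / n

# A pipeline that short-circuits on the first failure:
print(bind(bind("4", parse_int), reciprocal))     # 0.25
print(bind(bind("zero", parse_int), reciprocal))  # None
print(bind(bind("0", parse_int), reciprocal))     # None
```

The point is that neither parse_int nor reciprocal has to check whether the previous step failed; bind carries that boilerplate, which is exactly the kind of plumbing a monad abstracts away.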

Haskell is noted for its ability to be “concise and articulate”.

Example Haskell Code

There’s a whole bunch of stuff you can do in Haskell; see the reference code here:
B: A Crash Course in Category Theory – Bartosz Milewski [Video]. Retrieved on 27th Nov 2019.

back to index


Visual Basic (1991-)

Visual Basic 1.0 was introduced in 1991. The drag and drop design for creating the user interface is derived from a prototype form generator developed by Alan Cooper and his company called Tripod. Microsoft contracted with Cooper and his associates to develop Tripod into a programmable form system for Windows 3.0, under the code name Ruby (no relation to the later Ruby programming language). Tripod did not include a programming language at all. Microsoft decided to combine Ruby with the Basic language to create Visual Basic. The Ruby interface generator provided the “visual” part of Visual Basic, and this was combined with the “EB” Embedded BASIC engine designed for Microsoft’s abandoned “Omega” database system. Ruby also provided the ability to load dynamic link libraries containing additional controls (then called “gizmos”), which later became the VBX interface.

Key People

Alan Cooper


Visual Basic is a third-generation event-driven programming language from Microsoft known for its Component Object Model (COM) programming model first released in 1991 and declared legacy during 2008. Microsoft intended Visual Basic to be relatively easy to learn and use. Visual Basic was derived from BASIC and enables the rapid application development (RAD) of graphical user interface (GUI) applications, access to databases using Data Access Objects, Remote Data Objects, or ActiveX Data Objects, and creation of ActiveX controls and objects.

(Source: Wikipedia)

The last version of old-school Visual Basic is VB6.

Example Code

Second Example (VB6)




Ruby (~1995)

“Ruby is an interpreted, high-level, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro “Matz” Matsumoto in Japan.”

Use Cases

Example Code



Source: Wikipedia
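A small illustrative Ruby snippet, showing its everything-is-an-object flavour and its expressive block syntax:

```ruby
# In Ruby, even integers are objects with methods.
puts 5.even?        # false
puts 5.class        # Integer

# Blocks make iteration expressive and concise.
doubled = [1, 2, 3].map { |n| n * 2 }
puts doubled.inspect  # [2, 4, 6]
```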


Ruby is an interesting language and was previously more widely used for web back-end work, most notably as the language in which the framework “Ruby on Rails” is written.


Ruby on Rails, or Rails, is a server-side web application framework written in Ruby under the MIT License. Rails is a model–view–controller (MVC) framework, providing default structures for a database, a web service, and web pages. It encourages and facilitates the use of web standards such as JSON or XML for data transfer, HTML, CSS and JavaScript for user interfacing. In addition to MVC, Rails emphasizes the use of other well-known software engineering patterns and paradigms, including convention over configuration (CoC), don’t repeat yourself (DRY), and the active record pattern.

Ruby on Rails’ emergence in the 2000s greatly influenced web app development, through innovative features such as seamless database table creations, migrations, and scaffolding of views to enable rapid application development. Ruby on Rails’ influence on other web frameworks remains apparent today, with many frameworks in other languages borrowing its ideas, including Django in Python, Catalyst in Perl, Laravel and CakePHP in PHP, Phoenix in Elixir, Play in Scala, and Sails.js in Node.js.

“”” (Wikipedia)


PHP (1994/1995)

PHP: Hypertext Preprocessor (or simply PHP) is a general-purpose programming language originally designed for web development. It was originally created by Rasmus Lerdorf in 1994; the PHP reference implementation is now produced by The PHP Group. PHP originally stood for Personal Home Page, but it now stands for the recursive initialism PHP: Hypertext Preprocessor.

Key People
Rasmus Lerdorf

Andi Gutmans

Zeev Suraski

PHP is an “HTML-embedded scripting language” primarily used for dynamic Web applications. The first part of this definition means that PHP code can be interspersed with HTML, making it simple to generate dynamic pieces of Web pages on the fly. As a scripting language, PHP code requires the presence of the PHP processor. PHP code is normally run in plain-text scripts that will only run on PHP-enabled computers (conversely programming languages can create standalone binary executable files, a.k.a. programs). PHP takes most of its syntax from C, Java, and Perl. It is an open source technology and runs on most operating systems and with most Web servers. PHP was written in the C programming language by Rasmus Lerdorf in 1994 for use in monitoring his online resume and related personal information. For this reason, PHP originally stood for “Personal Home Page”. Lerdorf combined PHP with his own Form Interpreter, releasing the combination publicly as PHP/FI (generally referred to as PHP 2.0) on June 8, 1995. Two programmers, Zeev Suraski and Andi Gutmans, rebuilt PHP’s core, releasing the updated result as PHP/FI 2 in 1997. The acronym was formally changed to PHP: HyperText Preprocessor, at this time. (This is an example of a recursive acronym: where the acronym itself is in its own definition.) In 1998, PHP 3 was released, which was the first widely used version. PHP 4 was released in May 2000, with a new core, known as the Zend Engine 1.0. PHP 4 featured improved speed and reliability over PHP 3. In terms of features, PHP 4 added references, the Boolean type, COM support on Windows, output buffering, many new array functions, expanded object-oriented programming, inclusion of the PCRE library, and more.
PHP 5 was released in July 2004, with the updated Zend Engine.
“””

PHP 5 was released … after long development and several pre-releases. It is mainly driven by its core, the Zend Engine 2.0 with a new object model and dozens of other new features.

“PHP’s development team includes dozens of developers, as well as dozens of others working on PHP-related and supporting projects, such as PEAR, PECL, and documentation, and an underlying network infrastructure of well over one-hundred individual web servers on six of the seven continents of the world. Though only an estimate based upon statistics from previous years, it is safe to presume PHP is now installed on tens or even perhaps hundreds of millions of domains around the world”.


PHP 6 was a short-lived version of PHP, to do with technical changes the language needed; its experiment with native Unicode support was never completed, and it was never released.


During 2014 and 2015, a new major PHP version was developed, which was numbered PHP 7. The numbering of this version involved some debate among internal developers. While the PHP 6 Unicode experiment had never been released, several articles and book titles referenced the PHP 6 name, which might have caused confusion if a new release were to reuse the name. After a vote, the name PHP 7 was chosen.

The foundation of PHP 7 is a PHP branch that was originally dubbed PHP next generation (phpng). It was authored by Dmitry Stogov, Xinchen Hui and Nikita Popov, and aimed to optimize PHP performance by refactoring the Zend Engine while retaining near-complete language compatibility. By 14 July 2014, WordPress-based benchmarks, which served as the main benchmark suite for the phpng project, showed an almost 100% increase in performance. Changes from phpng are also expected to make it easier to improve performance in the future, as more compact data structures and other changes are seen as better suited for a successful migration to a just-in-time (JIT) compiler. Because of the significant changes, the reworked Zend Engine is called Zend Engine 3, succeeding Zend Engine 2 used in PHP 5.


PHP 8 was released on November 26, 2020. PHP 8 is a major version and has breaking changes from previous versions. New features and notable changes include:

  • Just-in-time compilation
  • Addition of the match expression
  • Type changes and additions
  • Syntax changes and additions
  • Standard library changes and additions (for example WeakMap)


Laravel is a free, open-source PHP web framework (a software framework that is designed to support the development of web applications including web services, web resources, and web APIs.), created by Taylor Otwell and intended for the development of web applications following the model–view–controller (MVC) architectural pattern and based on Symfony. Some of the features of Laravel are a modular packaging system with a dedicated dependency manager, different ways for accessing relational databases, utilities that aid in application deployment and maintenance, and its orientation toward syntactic sugar.

The source code of Laravel is hosted on GitHub and licensed under the terms of MIT License.
“”” (Wikipedia).


JAVA (1991 / 1995)

Key People
Travelling back in time to the early ’90s: the Hubble Telescope had just launched, the first President Bush was in office, and the Cold War was appearing to come to an end as the USSR broke up. In June 1991, with Amy Grant’s Baby Baby playing on the radio (#popclassic), a bunch of guys working at Sun Microsystems, James Gosling, Mike Sheridan, and Patrick Naughton, started work in a new team, known as “the Green Team”, a type of R&D-focused unit looking at emerging technologies, which eventually decided to work on developing a new type of device called the “*7” (Star 7).

In the summer of ’92 they presented the new handheld home-entertainment controller device they had come up with, which featured a touchscreen user interface. Along with the device had come a programming language; the language was created by Green Team member James Gosling specifically for *7, and it was called “Oak”, since he had an oak tree outside his office window when he worked on the 4th floor of a building located at 2180 Sand Hill Road, Menlo Park, CA, close to Stanford University, where Sun began.

Eventually, the language would become known as Java after a bunch of other random names like Zygote and Silk had been discounted. There are various stories about exactly who came up with the name, although it seems like it was somehow connected to Java coffee at some point, hence the current logo. Again the idea of using the language on the device was based around having it communicate with other devices.

Java developed into a general-purpose, object-oriented language known for being fast and efficient. It has a write-once-run-anywhere model, i.e. platform independence. It also supports multi-threading, or so-called concurrent programming. The syntax of Java is quite similar to C, C++, and C#, but very different from Objective-C or Swift. Java was designed to be stable, simple, and easy to use, and as such it attempts to manage memory for the programmer through garbage collection (whereas Objective-C and Swift now use ARC).

Java has a large standard library, its API, with a lot of pre-written functionality, and thus, like many other programming languages, it allows one to “stand on the shoulders of giants” as it were; basically, one does not have to re-write a whole bunch of stuff for which good-quality solutions already exist (for comparison, this is quite similar to the Cocoa frameworks in Objective-C or Swift).

Initially released by Sun Microsystems in 1995, Java is a general-purpose programming language that was designed with the specific goal of allowing developers to “write once, run anywhere.” Java applications are compiled into bytecode that can run on implementations of the Java Virtual Machine (JVM). Like CLI, JVM helps bridge the gap between source code and the 1s and 0s that the computer understands.

The goal with Java as it is now is platform independence through the JVM and the intermediate stage of bytecode, which the Java compiler compiles to instead of fully machine code. This confers several advantages: because different processors (Intel, AMD, etc.) have different chipsets and thus different instruction sets, this approach allows a Java program to run across these architectures by letting the JVM adapt the bytecode to the given system it’s running on…

Webrunner  => HotJava


Dynamic ==>> HTML + Programs = Applets   (platform independent)

Being free meant Java eventually became popular for a whole range of applications, like mobile apps, large web apps, and desktop applications.

Netscape Navigator shipped with Java Applet support
“A Java applet was a small application that is written in the Java programming language, or another programming language that compiles to Java bytecode, and delivered to users in the form of Java bytecode. The user launched the Java applet from a web page, and the applet was then executed within a Java virtual machine (JVM) in a process separate from the web browser itself. A Java applet could appear in a frame of the web page, a new application window, Sun’s AppletViewer, or a stand-alone tool for testing applets” (Source: Wikipedia).
There are a whole load of open-source libraries written in Java that are used by large companies.

Example Java Code
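The example code itself did not survive in this copy; as a minimal illustrative stand-in (class and variable names are arbitrary), note that every Java program lives inside a class, and execution starts at the static main method:

```java
public class Hello {
    public static void main(String[] args) {
        // Strings and arithmetic, printed to standard output.
        String greeting = "Hello from Java";
        System.out.println(greeting);
        System.out.println("2 + 3 = " + (2 + 3));
    }
}
```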

Example with Dependency Injection

As with other languages, we can follow good principles and use dependency injection over static utilities or singletons:

(Ref: #A)
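The referenced snippet did not survive here; the following sketch, loosely after the spell-checker example in Effective Java (class and interface names are illustrative), shows the dependency being passed in through the constructor rather than hard-wired as a static utility or singleton:

```java
import java.util.List;
import java.util.Objects;

interface Lexicon {
    boolean contains(String word);
}

class SpellChecker {
    private final Lexicon dictionary;

    // The dependency is injected, not created internally or fetched from
    // a static utility, which keeps the class flexible and easy to test.
    SpellChecker(Lexicon dictionary) {
        this.dictionary = Objects.requireNonNull(dictionary);
    }

    boolean isValid(String word) {
        return dictionary.contains(word);
    }
}

public class Demo {
    public static void main(String[] args) {
        // Any Lexicon implementation will do - here a tiny lambda-backed one.
        Lexicon english = word -> List.of("cat", "dog").contains(word);
        SpellChecker checker = new SpellChecker(english);
        System.out.println(checker.isValid("cat"));  // true
        System.out.println(checker.isValid("xyz"));  // false
    }
}
```

In a test, we could inject a fake Lexicon instead of the real dictionary, which is the main payoff of this pattern.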


A: Bloch, J. (2018). Effective Java – Third Edition. Pearson Education Inc, New York City.

back to index

JavaScript / JS (~1995)

JavaScript “is a high-level, interpreted programming language. It is a language which is also characterized as dynamic, weakly typed, prototype-based and multi-paradigm”. JS is a scripting language, and is a totally separate language to Java, which is not a scripting language (although the two have shared some syntactical elements).

JavaScript can be used as a client-side as well as a server-side scripting language; it can be inserted into HTML pages and can thus be understood and executed by web browsers.

Alongside HTML and CSS, JavaScript is one of the three core technologies of the World Wide Web. It is used to make dynamic webpages interactive and provide online programs, including video games. The majority of websites employ it, and all modern web browsers support it without the need for plug-ins by means of a built-in JavaScript engine. Each of the many JavaScript engines represents a different implementation of JavaScript, all based on the ECMAScript scripting language specification standard, with some engines not supporting the spec fully, and with many engines supporting additional features beyond ECMA (Wikipedia).

Key People

Brendan Eich   &&  Marc Andreessen

“Although it was developed under the name Mocha, the language was officially called LiveScript when it first shipped in beta releases of Netscape Navigator 2.0 in September 1995, but it was renamed JavaScript when it was deployed in the Netscape Navigator 2.0 beta 3 in December. The final choice of name caused confusion, giving the impression that the language was a spin-off of the Java programming language, and the choice has been characterized as a marketing ploy by Netscape to give JavaScript the cachet of what was then the hot new Web programming language” (Wikipedia)

Code Example



back to index

Versions of the ECMAScript standard

“While both JavaScript and JScript aim to be compatible with ECMAScript, they also provide additional features not described in the ECMA specifications”

ECMAScript 1 (1997) – The first version

ECMAScript 2 (1998) – no major revisions
ECMAScript 3 (1999) – added support for regex (regular expressions), and try/catch
ECMAScript 4 – abandoned; no version of this was ever widely released
ECMAScript 5 (2009) – numerous changes, including strict mode and native JSON support
ECMAScript 6 (ES6)  / ECMAScript 2015
ECMAScript 2016 (ES2016) / ES7
ECMAScript 2017 (ES2017)
ECMAScript 2018 (ES2018)

The HTML DOM (Document Object Model)

When a web page is loaded, the browser creates a Document Object Model of the page.

The HTML DOM model is constructed as a tree of Objects:

The HTML DOM Tree of Objects


With the object model, JavaScript gets all the power it needs to create dynamic HTML:

  • JavaScript can change all the HTML elements in the page
  • JavaScript can change all the HTML attributes in the page
  • JavaScript can change all the CSS styles in the page
  • JavaScript can remove existing HTML elements and attributes
  • JavaScript can add new HTML elements and attributes
  • JavaScript can react to all existing HTML events in the page
  • JavaScript can create new HTML events in the page

The HTML DOM is a standard for how to get, change, add, or delete HTML elements.

Javascript Based Frameworks


Vue.js (commonly referred to as Vue; pronounced like “view”) is an open-source model–view–viewmodel front end JavaScript framework for building user interfaces and single-page applications. It was created by Evan You and is maintained by him and the rest of the active core team members.
Source: Wikipedia




“Angular is a development platform, built on TypeScript. As a platform, Angular includes:

A component-based framework for building scalable web applications
A collection of well-integrated libraries that cover a wide variety of features, including routing, forms management, client-server communication, and more
A suite of developer tools to help you develop, build, test, and update your code”.


Angular Examples

React & React Native (js related)

Developed at Facebook as a better way of making the Facebook websites work well, React evolved into a more widely used system and, via React Native, a way of making native apps on both Android and iOS from a common code base, very much making use of JavaScript extended with JSX-style markup tags.

back to index

Typescript (js related)

TypeScript is a modern JavaScript development language, statically typed and compiled to JavaScript, to facilitate safer, cleaner code. It can be run on Node.js or in any browser which supports ECMAScript 3 or newer versions.
“TypeScript provides optional static typing, classes, and interfaces. For a large JavaScript project, adopting TypeScript can bring you more robust software that is still easily deployable as a regular JavaScript application”.

Example Code
What we have created in the below example is done by taking a normal bit of JavaScript and adding type annotations. This means we’ll get a compiler error if we try to call our greeter function passing in a parameter that is not of type string. It’s this type safety that TypeScript tries to bring to JavaScript.

back to index

OCaml (~1996)

OCaml is a member of the ML language family. This means it derives from a language called Classic ML, a functional programming language developed by Robin Milner and others at the Edinburgh Laboratory for Computer Science in Scotland. The purpose of ML was to create a language that was a better “theorem prover” than Lisp.

“In the early ’80s, there was a schism in the ML community with the French on one side and the British and US on another. The French went on to develop CAML and later Objective CAML (OCaml) while the Brits and Americans developed Standard ML. The two dialects are quite similar. Microsoft introduced its own variant of OCaml called F# in 2005″(Ref#: B).

“OCaml (/oʊˈkæməl/ oh-KAM-əl) (formerly Objective Caml) is the main implementation of the Caml programming language, created in 1996 by Xavier Leroy, Jérôme Vouillon, Damien Doligez, Didier Rémy, Ascánder Suárez, and others. It extends Caml with object-oriented features, and is a member of the ML family” (Wikipedia)

Code Examples

Higher-Order Functions

Recursive Functions
In order to use a recursive function in OCaml, you need to use the keywords let rec, as functions are not recursive unless you explicitly define them as such.


“Abstraction, also known as information hiding, is fundamental to computer science. When faced with creating and maintaining a complex system, the interactions of different components can be simplified by hiding the details of each component’s implementation from the rest of the system. Details of a component’s implementation are hidden by protecting it with an interface. An interface describes the information which is exposed to other components in the system. Abstraction is maintained by ensuring that the rest of the system is invariant to changes of implementation that do not affect the interface.”
“The most powerful form of abstraction in OCaml is achieved using the module system. The module system is basically its own language within OCaml, consisting of modules and module types. All OCaml definitions (e.g. values, types, exceptions, classes) live within modules, so the module system’s support for abstraction includes support for abstraction of any OCaml definition”(Ref#: C).

“The module IntSet uses lists of integers to represent sets of integers. This is indicated by the inclusion of a type t defined as an alias to int list. The implementation provides the basic operations of sets as a collection of functions that operate on these int lists.
The components of a structure are accessed using the . operator. For example, the following creates a set containing 1, 2 and 3”.



back to index

Scala (2001 /2004)

“The design of Scala started in 2001 at the École Polytechnique Fédérale de Lausanne (EPFL) (in Lausanne, Switzerland).” and “It followed on from work on Funnel, a programming language combining ideas from functional programming and Petri nets. Odersky formerly worked on Generic Java, and javac, Sun’s Java compiler”.
After an internal release in late 2003, Scala was released publicly in early 2004 on the Java platform, [and a] second version (v2.0) followed in March 2006 (Source: Wikipedia).

Key People

Martin Odersky
It was Odersky who developed Scala out of an original set of work he did on the programming language Funnel. “Funnel led to Scala, whose design began in 2001, and which was first released in 2003. Scala is not an extension of Java, but it is completely interoperable with it. Scala translates to Java bytecodes, and the efficiency of its compiled programs usually equals Java’s. A second implementation of Scala compiles to .NET. (this version is currently out of date, however)”(Ref#: A).
“Scala was designed to be both object-oriented and functional. It is a pure object-oriented language in the sense that every value is an object. Objects are defined by classes, which can be composed using mixin composition. Scala is also a functional language in the sense that every function is a value. Functions can be nested, and they can operate on data using pattern matching” (Ref#: A).
Martin has actually described Scala as “the Java of the future” (Ref#: B). Despite this, as well as adding things like closures, Scala also appears to take some things away.

(Source: Ref#: B)


Scala is a general-purpose programming language providing support for functional programming and a strong static type system. Designed to be concise, many of Scala’s design decisions aimed to address criticisms of Java (Source: Wikipedia). For example, Scala is designed to be scalable insofar as it uses strong typing, inference, and light boilerplate. Furthermore, it provides a tight integration of Object-Oriented programming and functional programming into one language (Ref#:B).

Code Examples

A Scala code snippet could look like this:



back to index

Kotlin (2011/2016)

Kotlin is a statically typed programming language that runs on the JVM (see the section on the Java programming language for more on this). It may also be compiled to JavaScript source code, or built with the LLVM compiler infrastructure (Ref#: A).
Kotlin came from the JetBrains team out of Saint Petersburg in Russia, and it was first publicized in 2011. That year Lady Gaga’s hit “The Edge of Glory” was released, and other minor stuff also happened, like the Egyptian Revolution and Bin Laden getting killed. It so happened, as we now know, that Kotlin was itself on the “Edge of Glory”, as we shall now find out:

It wasn’t until 2012 that the project became open source, and not until February 15, 2016, that Kotlin v1.0, the “first officially stable release”, came about.

The language was developed with the goal of addressing some of the issues with Java, which was the most popular programming language in use in various contexts, particularly in Android development at that time.

One of the main strengths of Kotlin is its ability to interoperate with Java code, making it an easy choice for developers who are already familiar with Java but want to take advantage of the new features offered by Kotlin. This compatibility allows developers to use Kotlin in existing Java projects, gradually adopting its features over time.

Kotlin has a compact and expressive syntax, making it easier to write and maintain code compared to Java. Additionally, it offers null safety, extension functions, and higher-order functions, which make it easier to write concise and expressive code. Another advantage is that it has better type inference, reducing the amount of boilerplate code that developers need to write.

Kotlin is also designed to be more concise, reducing the amount of code needed to implement a feature, while still being expressive and readable. This can lead to improved productivity, as well as reduced bugs and technical debt.

Kotlin has become increasingly popular in recent years, especially after Google announced its official support for the language for Android development in 2017. Since then, Kotlin has seen widespread adoption, and many companies, including Amazon, Atlassian, and Netflix, are using it for their development needs.


Key People

Dmitry Jemerov


Kotlin Code Examples
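A short illustrative sketch (all names are hypothetical) of three features just described: null safety with a default via the Elvis operator, an extension function, and a higher-order function:

```kotlin
// Extension function: adds a method to String without subclassing
fun String.shout(): String = uppercase() + "!"

fun main() {
    val name: String? = null               // nullable type, tracked by the compiler
    // The Elvis operator ?: supplies a default when the value is null
    println(name ?: "anonymous")           // prints anonymous

    // map is a higher-order function taking a lambda
    val doubled = listOf(1, 2, 3).map { it * 2 }
    println(doubled)                       // prints [2, 4, 6]

    println("hello".shout())               // prints HELLO!
}
```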


Kotlin is now taking over the Android app development scene, offering a number of benefits over Java, including better type inference, null safety, and more concise syntax, which make it a compelling choice for developers.

D: Introduction to Kotlin (Google I/O ’17).

back to index

The “R” Programming Language (1995/2000)

“R is a programming language and free software environment for statistical computing and graphics that is supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis. Polls, surveys of data miners, and studies of scholarly literature databases show that R’s popularity has increased substantially in recent years. As of August 2018, R ranks 18th in the TIOBE index, a measure of [the] popularity of programming languages” (Wikipedia).

In the world of Machine Learning, we often use R alongside Python to develop learning models and other statistically-based Machine Learning techniques.
R Code Examples
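A minimal sketch of R’s statistical flavour, using only base R and simulated (hypothetical) data: generate points around a known line, fit a least-squares model, and inspect the coefficients:

```r
# Simulate data around y = 2x with some noise
set.seed(42)
x <- rnorm(100)
y <- 2 * x + rnorm(100, sd = 0.5)

# Fit a linear model with the built-in lm() function
fit <- lm(y ~ x)

# The estimated slope should land near 2
print(coef(fit))
print(summary(fit)$r.squared)
```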


R’s roots are in the language “S”, which was created in the late 70s. However, the first version of R was released in 1995, and the first stable version was not released until 2000.
On a side note, I recently learnt a smidgen of R by working through the book Machine Learning for Dummies, and I definitely plan to continue and properly learn it for real-world use.

back to index

C# / “C Sharp” (1999)


Key People

Back in 1999, Anders Hejlsberg formed a team within Microsoft, working on the .NET Framework development project, to build an Object-Oriented, C-based language which was initially called “COOL” (yes, really). Subsequently renamed C# for trademark reasons, it began to be publicized by Microsoft as early as 2000.


For Anders, it was the downsides of other major programming languages that he wanted to address in C#. Essentially the goal of C# internally within Microsoft was to create a rival programming language to Java, or indeed, also as a kind of replacement for J++.

J++ was a programming language developed by Microsoft in the mid-1990s as an extension of the Java programming language. J++ was designed to make it easier for Windows developers to create Java applications and applets that utilized Microsoft’s user interface and other Windows-specific features. J++ added several features to Java, including support for the Windows Component Object Model (COM), the ability to access ActiveX controls, and improved support for graphical user interface (GUI) development. J++ was bundled with Microsoft’s Visual J++ development environment, which allowed developers to create, debug, and test Java applications using a visual development environment. However, J++ faced opposition from Sun Microsystems, the creators of Java, who viewed Microsoft’s extensions to the language as a violation of the Java licensing agreement. The two companies eventually reached a settlement, and Microsoft stopped development of J++ in favor of other technologies such as C# and the .NET framework.

The developers thought about “generics” support from the start of the development process, although this didn’t get implemented and released until later on.

So for C#, work started with the development of the Common Language Runtime (CLR) element of the .NET Framework (this is the virtual machine component of the Microsoft .NET Framework, with a similar role to the JVM in Java-based languages), and ideas developed as part of this work evolved into the ideas behind the language design of C#.

C# source code gets compiled into bytecode which runs on implementations of the Common Language Infrastructure (CLI).
C# is considered to be quite similar to Java, with some controversy around this. However, in recent years the languages have begun to diverge more and more, particularly with regard to the design decisions they have taken when implementing things like generics.

Advantages of C#

Automatic garbage collection and memory management
Reduced risk of memory leaks
Easy to develop with
Cross-platform support
Good integration with the wider .NET ecosystem
More legible code
Strong programming and tooling support
Backward compatibility
Lambda and generics support
Backed by Microsoft
Language Integrated Query (LINQ)
Easy extension methods
Properties with get/set methods


Example C# Code

Null Coalescing Operator

Using the null-coalescing operator (??) allows you to specify a default value for a nullable type if the left-hand operand is null (similar perhaps to what we see with Swift’s nil-coalescing operator).
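A short sketch of the operator in use (the values are hypothetical):

```csharp
using System;

class Program
{
    static void Main()
    {
        int? maybe = null;
        // ?? yields the right-hand operand when the left is null
        int value = maybe ?? -1;
        Console.WriteLine(value);             // prints -1

        string name = null;
        Console.WriteLine(name ?? "unknown"); // prints unknown
    }
}
```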



String Manipulation
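A small sketch of everyday string manipulation using the standard System.String API (the example values are hypothetical):

```csharp
using System;

class Program
{
    static void Main()
    {
        string s = "Hello, World";
        Console.WriteLine(s.ToUpper());              // HELLO, WORLD
        Console.WriteLine(s.Replace("World", "C#")); // Hello, C#
        Console.WriteLine(s.Substring(0, 5));        // Hello
        Console.WriteLine(s.Contains("World"));      // True
    }
}
```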

Hashing Functions

Source: C# for Professionals
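As a hedged sketch of hashing in C#, here is SHA-256 via the standard System.Security.Cryptography API (note that Convert.ToHexString assumes .NET 5 or later):

```csharp
using System;
using System.Text;
using System.Security.Cryptography;

class Program
{
    static void Main()
    {
        // Hash the UTF-8 bytes of a string with SHA-256
        using var sha = SHA256.Create();
        byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes("hello"));

        // Print the digest as hex (requires .NET 5+)
        Console.WriteLine(Convert.ToHexString(hash));
    }
}
```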


Note the following about interfaces:

They are usually prefixed with the letter “I”

An interface is a completely “abstract class”, which can only contain abstract methods and properties (with empty bodies).

They indeed can work similarly to things that we call “protocols” in other languages such as Swift.
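A brief sketch pulling these points together (the names are hypothetical): the “I” prefix, a bodyless method, and a class supplying the implementation:

```csharp
using System;

// "I" prefix by convention
interface IGreeter
{
    string Greet(string name);   // no body: implementers must supply one
}

class EnglishGreeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}";
}

class Program
{
    static void Main()
    {
        IGreeter g = new EnglishGreeter();   // program against the interface
        Console.WriteLine(g.Greet("Ada"));   // prints Hello, Ada
    }
}
```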


“LINQ introduces the standard and unified concept of querying various types of data sources falling in the range of relational databases, XML documents, and even in-memory data structures. LINQ supports all these types of data stores using LINQ query expressions of first-class language constructs in C#”. “We can easily retrieve data from any object that implements the IEnumerable<T> interface”.

A simple example in a particular context
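As a hedged sketch of the idea, here is LINQ query syntax over an in-memory `List<int>` (which, like any `IEnumerable<T>`, can be queried the same way as other data sources):

```csharp
using System;
using System.Linq;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 5, 1, 4, 2, 3 };

        // Query expression: filter, sort, and project in one construct
        var evens = from n in numbers
                    where n % 2 == 0
                    orderby n
                    select n;

        Console.WriteLine(string.Join(", ", evens));  // prints 2, 4
    }
}
```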


Generic means the general form, not the specific (so it can refer to “types to-be-specified-later”). This means code can be written without reference to a particular data type. We see this feature in a range of different programming languages, now including C#.

Generic Class

Generic Method in a Non-Generic Class
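A combined sketch of both forms (the names are hypothetical): a generic class, and a generic method declared inside a non-generic class:

```csharp
using System;

// Generic class: T is supplied by the caller
class Box<T>
{
    public T Value { get; set; }
}

// Non-generic class containing a generic method
class Util
{
    public static void Swap<T>(ref T a, ref T b)
    {
        T tmp = a; a = b; b = tmp;
    }
}

class Program
{
    static void Main()
    {
        var box = new Box<string> { Value = "hi" };
        Console.WriteLine(box.Value);    // prints hi

        int x = 1, y = 2;
        Util.Swap(ref x, ref y);         // T inferred as int at the call site
        Console.WriteLine($"{x} {y}");   // prints 2 1
    }
}
```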


back to index

Visual Basic .NET / VB.NET (~2001/2)

“Visual Basic .NET (VB.NET) is a multi-paradigm, object-oriented programming language, implemented on the .NET Framework. Microsoft launched VB.NET in 2002 as the successor to its original Visual Basic language”(Ref#: A).

The .NET framework is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It provides a platform for building, deploying, and running a variety of applications, including web applications, desktop applications, and mobile apps. The .NET framework was first introduced in 2002 and has since undergone several major releases. It provides a common set of libraries and runtime environments for building and running applications written in different programming languages, including C#, F#, and Visual Basic .NET. The .NET framework is designed to be highly interoperable and to allow for easy integration with other technologies. It also provides security features, such as code access security and secure communication, to help ensure the safety and reliability of applications built on the .NET framework.

Visual Basic .NET (VB.NET) is a version of the popular Visual Basic programming language* that was designed to work with the .NET framework, which is a software development platform from Microsoft. The first version of VB.NET was released in 2002 as a part of Microsoft’s Visual Studio .NET development environment. The language was updated several times to include new features and improve its performance. VB.NET was created to provide a more modern and robust programming language that could be used to build a wide range of applications, from desktop software to web applications. Over the years, VB.NET has become a popular choice for developers due to its ease of use, versatility, and compatibility with the .NET framework.

Visual Basic is a high-level, event-driven programming language that was first introduced by Microsoft in 1991. It was designed to make it easier for non-programmers to create graphical user interface (GUI) applications, and it quickly became one of the most popular programming languages in the world. Visual Basic combined the simplicity of Basic with the power of Windows, making it easy for people to create desktop applications that could interact with the Windows operating system. Over the years, Visual Basic has evolved, with new versions being released to support the latest technology and programming paradigms. Today, Visual Basic is still widely used for developing Windows desktop applications, as well as for developing web applications using the .NET framework.


Code Examples
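A minimal illustrative VB.NET console sketch (the names are hypothetical), showing the language’s keyword-delimited block style rather than C-style braces:

```vb
' Blocks are delimited by keywords (Module/End Module, Sub/End Sub)
Module HelloModule
    Sub Main()
        Dim name As String = "world"
        ' & is VB's string-concatenation operator
        Console.WriteLine("Hello, " & name)   ' prints Hello, world
    End Sub
End Module
```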





F# (2005)

“F# (pronounced F sharp) is a general-purpose, strongly typed, multi-paradigm programming language that encompasses functional, imperative, and object-oriented programming methods. F# is most often used as a cross-platform Common Language Infrastructure (CLI) language, but it can also generate JavaScript and graphics processing unit (GPU) code.

The syntax of F# is different from C-style languages:

  • Curly braces are not used to delimit blocks of code. Instead, indentation is used (like Python).
  • Whitespace is used to separate parameters rather than commas.

F# was developed by the F# Software Foundation, Microsoft, and open contributors. An open-source, cross-platform compiler for F# is available from the F# Software Foundation. F# is also a fully supported language in Visual Studio and Xamarin Studio. Other tools supporting F# development include Mono, MonoDevelop, SharpDevelop, MBrace, and WebSharper. Plug-ins supporting F# exist for many widely used editors, most notably the Ionide extension for Atom and Visual Studio Code, and integrations for other editors such as Vim, Emacs, Sublime Text, and Rider.

F# is a member of the ML language family and originated as a .NET Framework implementation of a core of the programming language OCaml. It has also been influenced by C#, Python, Haskell, Scala, and Erlang” (Ref#: A).

“The F# research project sought to create a variant of the OCaml language running on top of the .Net platform. A related project at MSR Cambridge, SML.NET, did the same for Standard ML. (OCaml and Standard ML are both variants of the ML language)” (Ref#: B)

As such the language sits as part of the general family of primarily functional languages including:

  • Standard ML
  • OCaml
  • Haskell
  • Clojure
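The indentation-delimited, whitespace-separated syntax described above can be sketched briefly (the names are hypothetical):

```fsharp
// Arguments are separated by whitespace, not commas or parentheses
let add a b = a + b

// Indentation, not curly braces, delimits the block
let describe n =
    if n % 2 = 0 then "even"
    else "odd"

// prints: 5 is odd
printfn "%d is %s" (add 2 3) (describe (add 2 3))
```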






Swift (~2010)

Let’s journey back to 2010 – popular music in the charts at that time included “California Gurls” by Katy Perry and “Dynamite” by Taio Cruz. Meanwhile, in the Appleverse, it was the occasion of WWDC 2010, where Apple launched C++ support in Clang. It was following these events that the story of Swift was to begin.

Key People
Swift is a language that was developed in-house by Apple. Its story started in July of 2010, when Chris Lattner decided to start a project to create a new programming language; at the time his job was “Senior Manager and Architect, Low-Level Tools” at Apple, where he was also instrumental in the development of ARC (Automatic Reference Counting).

Aside on Chris Lattner’s background and LLVM
Lattner studied computer science at the University of Portland, Oregon, graduating in 2000. While in Oregon, he worked as an operating system developer, enhancing Sequent Computer Systems’s DYNIX/ptx.

“In late 2000, Lattner joined the University of Illinois at Urbana-Champaign as a research assistant and M.Sc. student. While working with Vikram Adve, he designed and began implementing LLVM, an innovative infrastructure for optimizing compilers, which was the subject of his 2002 M.Sc. thesis. He completed his Ph.D. in 2005, researching new techniques for optimizing pointer-intensive programs and adding them to LLVM.

In 2005, Apple Inc. hired Lattner to begin work bringing LLVM to production quality for use in Apple products. Over time, Lattner built out the technology, personally implementing many major new features in LLVM, formed and built a team of LLVM developers at Apple, started the Clang project, took responsibility for evolving Objective-C (contributing to the blocks language feature, and driving the ARC and Objective-C literals features), and nurtured the open source community (leading it through many open source releases). Apple first shipped LLVM-based technology in the 10.5 (and 10.4.8) OpenGL stack as a just-in-time (JIT) compiler, shipped the llvm-gcc compiler in the integrated development environment (IDE) Xcode 3.1, Clang 1.0 in Xcode 3.2, Clang 2.0 (with C++ support) in Xcode 4.0, and LLDB, libc++, assemblers, and disassembler technology in later releases.

The name LLVM was originally an acronym for Low Level Virtual Machine. This acronym has officially been removed to avoid confusion, as the LLVM has evolved into an umbrella project that has little relationship to what most current developers think of as virtual machines. Now, LLVM is a brand that applies to the LLVM umbrella project, the LLVM intermediate representation (IR), the LLVM debugger, the LLVM implementation of the C++ Standard Library (with full support of C++11 and C++14), etc. LLVM is administered by the LLVM Foundation. Its president is compiler engineer Tanya Lattner.
“For designing and implementing LLVM”, the Association for Computing Machinery presented Vikram Adve, Chris Lattner, and Evan Cheng with the 2012 ACM Software System Award.”(Wikipedia).

“If one has to choose a modern-day “winner” for Best Intermediate Code, among the 40 entries on the Wikipedia “Bytecodes” page, then LLVM IR would be a very strong contender. The sheer number of organisations that have adopted it…”(Prof. Brailsford).
From conversations and white-board sessions between Chris and Bertrand Serlet, there then evolved the idea of Swift, which was originally called “Shiny”: it was to be a shiny new programming language with improvements over Objective-C and C++. Lattner felt that “you can’t retrofit memory safety into Objective-C without removing the C” (S-Ref#: C), and thus part of the goal for Swift was to solve memory safety.
He says that it was he who implemented “much of the basic language structure” but that much of the rest of the development then became a big team effort at Apple (S-Ref#: A).

When Chris was thinking about the design of the language he looked to lots of lessons learned from “Objective-C, Rust (Graydon Hoare and Mozilla), Haskell, Ruby (Yukihiro “Matz” Matsumoto), Python, C#, CLU (B. Liskov)” and a lot of other sources.

Indeed, Chris is still very active in contributing to Swift and has written about how he sees Swift evolving with regards to Concurrency, with the likes of async/await proposed to address a perceived issue with complex completion-handler syntax, possibly for Swift 6.

It wasn’t really until 2014 that development of the language reached the stage where Apple began to publicize it externally and release it to a selected group of developers. Swift reached its 1.0 milestone in early September 2014, with the Gold Master of Xcode 6.0.

Example Swift Code
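A short illustrative sketch (the names are hypothetical) of two of the safety ideas discussed above: optionals with nil-coalescing, plus a value type and a higher-order function:

```swift
// A value type: structs are copied, not shared by reference
struct Greeter {
    let name: String?          // optional: may hold a value or nil

    func greeting() -> String {
        // nil-coalescing (??) supplies a default when name is nil
        return "Hello, \(name ?? "stranger")"
    }
}

let g = Greeter(name: nil)
print(g.greeting())            // prints Hello, stranger

// map is a higher-order function taking a closure
let squares = [1, 2, 3].map { $0 * $0 }
print(squares)                 // prints [1, 4, 9]
```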


Swift is now the most widely used language in iPhone development despite the enduring popularity of Objective-C amongst large companies. It provides a relatively easy access point to new programmers looking to make their own apps, as well as all the functionality needed to implement large and complex apps for major companies.

back to index


General and Historical
A: COMP6411 Comparative Study of Programming Languages. Retrieved from: on 4th May 2018

B: Giloi, W.K. (1997). Konrad Zuse’s Plankalkül: the first high-level, “non von Neumann” programming language. IEEE Annals of the History of Computing, 19:2.

C: Bauer, F.L., & Wössner, H. (1972). The “Plankalkul” of Konrad Zuse: A Forerunner of Today’s Programming Languages. Communications of the ACM, 5:7.
C: Jones, C. (). The Technical and Social History of Software Engineering.
D: Objective-C. Sourced from:
For Swift
D: The UNCOL Problem – Computerphile retrieved from: on Sept 15th 2019.
For C++





Last modified: April 20, 2024