
The chemical machine paradigm can be understood as a modification of the Actor model where the "chemical actors" are type-safe and are automatically managed by the runtime system.

I show that all the features of the chemical machine logically follow from this single requirement, together with the idea of waiting for messages from multiple mailboxes at once. In particular, it is necessary that the actors become stateless (all data resides on messages); that mailboxes become unordered multisets of messages, rather than ordered queues; and that the parallelism of the computations becomes an automatic feature provided by the runtime, rather than being managed explicitly by the programmer.

In this way, the chemical machine paradigm improves upon the Actor model, freeing the programmer from the need to manage the actors' lifecycles and mutable states, and making the code more purely functional and more declarative. The programmer can design a chemical machine program by reasoning directly in terms of data that resides on messages in mailboxes, rather than having to handle multiple running processes and their mutable states, as is still necessary in the Actor model.

I also show how programs written in the Actor model can be directly and straightforwardly simulated by chemical machine programs.

This tutorial is based on a chapter from "Concurrency in Reactions", a tutorial book on `Chymyst` and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

This short summary gives an overview of the material presented in detail in the corresponding chapter of the functional programming tutorial.

The summary is designed to help you decide whether you want to spend time on the full tutorial, and also to refresh the main points of the tutorial after you have already listened to it.

The full recording of chapter 9 is here:

https://www.youtube.com/watch?v=OnaHcVBbCGQ

Functional programming in the mathematical spirit.

Long and difficult, yet boring explanations given in excruciating detail.

Start by reading the slides, go through the worked examples and exercises. Watch the video when you cannot fully understand something in the slides.

Chapter 9: Traversable functors

Motivation for the `traverse` operation

Deriving the `sequence` operation to simplify `traverse`

How to implement `sequence` for any polynomial functor
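As a minimal illustration (a sketch, not the chapter's own code): for the simplest polynomial functor, `List`, specialized to the `Option` applicative, `sequence` can be written as a single fold:

```scala
// Sketch: `sequence` for the polynomial functor List, specialized to the
// Option applicative. A single None makes the entire result None.
def sequence[A](xs: List[Option[A]]): Option[List[A]] =
  xs.foldRight(Option(List.empty[A])) { (opt, acc) =>
    for { x <- opt; rest <- acc } yield x :: rest
  }

sequence(List(Some(1), Some(2), Some(3)))  // Some(List(1, 2, 3))
sequence(List(Some(1), None, Some(3)))     // None
```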

Motivation for laws of `traverse` from category theory

Deriving the laws for `sequence`

Constructions for traversable functors and bitraversable bifunctors

Deriving `foldMap` by specializing to a constant applicative functor

Reasons to introduce the `Foldable` type class

Implementing `foldLeft` via `foldMap`
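A sketch of this derivation, specialized to `List` for concreteness (the helper names are illustrative, not the chapter's): `foldMap` into the monoid of endomorphisms `B => B` yields `foldLeft`:

```scala
// Sketch: foldMap for List with an explicitly passed monoid (empty, combine).
def foldMap[A, M](xs: List[A])(f: A => M)(empty: M, combine: (M, M) => M): M =
  xs.map(f).foldRight(empty)(combine)

// foldLeft via foldMap, specialized to the monoid of endomorphisms B => B:
// `combine` is forward function composition, `empty` is the identity function.
def foldLeftViaFoldMap[A, B](xs: List[A])(b0: B)(f: (B, A) => B): B = {
  val endo = foldMap[A, B => B](xs)(a => b => f(b, a))(identity, _ andThen _)
  endo(b0)
}

foldLeftViaFoldMap(List(1, 2, 3))(0)((b, a) => b * 10 + a)  // 123
```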

Traversable contrafunctors and profunctors exist, but are not useful

Examples of using traversable and foldable: decorating trees

Implementing `scanLeft` as a traversal with respect to a State monad
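A sketch of this technique (with a hand-rolled minimal `State` monad, not the chapter's code): each element updates the state and also emits the updated accumulator:

```scala
// A minimal State monad, just enough to express a traversal.
final case class State[S, A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] = State { s => val (a, s1) = run(s); (f(a), s1) }
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (a, s1) = run(s); f(a).run(s1) }
}

// Traversing a List with a State-valued function threads the state left to right.
def traverse[S, A, B](xs: List[A])(f: A => State[S, B]): State[S, List[B]] =
  xs.foldRight(State[S, List[B]](s => (Nil, s))) { (a, acc) =>
    for { b <- f(a); rest <- acc } yield b :: rest
  }

// scanLeft: each element updates the accumulator and also outputs it.
def scanLeftViaState[A, B](xs: List[A])(z: B)(f: (B, A) => B): List[B] = {
  val (tail, _) = traverse(xs)(a => State[B, B] { s => val s1 = f(s, a); (s1, s1) }).run(z)
  z :: tail
}

scanLeftViaState(List(1, 2, 3))(0)(_ + _)  // List(0, 1, 3, 6)
```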

Why breadth-first tree traversal or depth level computation cannot be represented via a traversal with a State monad

Naturality with respect to an applicative functor

Exercises

Slides: https://github.com/winitzki/talks/blob/master/ftt-fp/09-traversables.pdf

Code examples: https://github.com/winitzki/scala-examples/tree/master/chapter09/src

How to make callback APIs composable using the continuation monad

Examples with synchronous and asynchronous APIs

Example: converting Java NIO2 to continuation monad

How to use the continuation monad to handle extra information after callbacks are called

Using the monad (A ⇒ Future[R]) ⇒ Future[R] to handle asynchronous callbacks with error recovery (the "transaction monad"), not described in detail in this talk, see sample code

What is a "monadic DSL", what are its costs and benefits

Difference between using Scala Futures and using a monadic DSL
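The core idea can be sketched in a few lines; here a hypothetical `addAsync` stands in for a real callback-style API such as NIO2:

```scala
// The continuation monad: a computation that, instead of returning A,
// accepts a callback A => R and eventually invokes it.
final case class Cont[R, A](run: (A => R) => R) {
  def map[B](f: A => B): Cont[R, B] = Cont(k => run(a => k(f(a))))
  def flatMap[B](f: A => Cont[R, B]): Cont[R, B] = Cont(k => run(a => f(a).run(k)))
}

// A hypothetical callback-style API (stands in for, e.g., a NIO2 read).
def addAsync(x: Int, y: Int)(callback: Int => Unit): Unit = callback(x + y)

// Wrapping the callback API in Cont makes it composable in a for-comprehension.
def addC(x: Int, y: Int): Cont[Unit, Int] = Cont(k => addAsync(x, y)(k))

val program: Cont[Unit, Int] =
  for { a <- addC(1, 2); b <- addC(a, 10) } yield b

program.run(result => println(result))  // prints 13
```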

Commented code for the presentation: https://github.com/winitzki/scala-examples/blob/master/chapter07/src/test/scala/example/ContinuationMonadPresentation.scala

Commented code for the "transaction monad": https://github.com/winitzki/scala-examples/blob/master/chapter08/src/test/scala/example/TransactionMonad.scala

This short summary gives an overview of the material presented in detail in the corresponding chapter of the functional programming tutorial.

The summary is designed to help you decide whether you want to spend time on the full tutorial, and also to refresh the main points of the tutorial after you have already listened to it.

The full recording of chapter 8, part 2 is split into three portions:

(1 of 3) https://www.youtube.com/watch?v=xBDkBriX7Uk

Functional programming in the mathematical spirit.

Long and difficult, yet boring explanations given in excruciating detail.

Start by reading the slides, go through the worked examples and exercises. Watch the video when you cannot fully understand something in the slides.

Chapter 8: Applicative functors and profunctors. Part 2: Laws and structure

Portion 3 of 3 covers slides 17 to 22.

Implementing and checking the laws for applicative contrafunctor constructions

Why all exponential-polynomial contrafunctors with monoid coefficients are applicative

Definition and properties of profunctors

Why all exponential-polynomial type constructors are profunctors

Example of a non-profunctor type constructor: a partial type function

Definition and laws of applicative profunctors

Constructions of applicative profunctors; verifying the laws

Commutative applicative functors, their interpretation, and examples

A unified category theory-based picture of "standard" functor classes (functor, contrafunctor, filterable, monad, applicative)

How to use this picture to find laws and to discover new type classes, such as the comonads

Exercises

Slides: https://github.com/winitzki/talks/blob/master/ftt-fp/08-applicatives-part2.pdf

Code examples: https://github.com/winitzki/scala-examples/tree/master/chapter08/src

Functional programming in the mathematical spirit.

Long and difficult, yet boring explanations given in excruciating detail.

Start by reading the slides, go through the worked examples and exercises. Watch the video when you cannot fully understand something in the slides.

Chapter 8: Applicative functors and profunctors. Part 2: Laws and structure

Portion 2 of 3 covers slides 15 to 17.

Implementing and checking the laws for applicative functor constructions

Examples of applicative functors that disagree with their monad instances

Examples of non-applicative functors

Monoid constructions: why all non-parameterized types are monoids

Why all polynomial functors with monoid coefficients are applicative
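For example (a sketch, not the lecture's code): the polynomial functor `F[A] = (W, A)` is applicative whenever the coefficient `W` is a monoid, here `String` under concatenation:

```scala
// Sketch: the polynomial functor F[A] = (W, A) with monoid coefficient W = String.
final case class F[A](w: String, value: A)

def pure[A](a: A): F[A] = F("", a)  // the monoid's empty element
def map2[A, B, C](fa: F[A], fb: F[B])(f: (A, B) => C): F[C] =
  F(fa.w + fb.w, f(fa.value, fb.value))  // the monoid operation combines coefficients

map2(F("log1;", 10), F("log2;", 32))(_ + _)  // F("log1;log2;", 42)
```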

Definition and laws of applicative contrafunctors

How applicative contrafunctors are different from applicative functors

Slides: https://github.com/winitzki/talks/blob/master/ftt-fp/08-applicatives-part2.pdf

Code examples: https://github.com/winitzki/scala-examples/tree/master/chapter08/src

I show how to implement a simple multi-threaded (not distributed) consensus algorithm in the Chemical Machine, using the Chymyst library in Scala.

I derive the implementation by reasoning about the necessary molecules and data to be handled.

I then introduce a simple error that seems to be made frequently in chemical machine programming. I show how to debug the contents of the reaction site in order to diagnose and fix the error. I also show how to enable various log levels and trace the scheduling of reactions as it is performed by the chemical machine runtime.

Code for this presentation: https://github.com/Chymyst/Chymyst/blob/master/src/test/scala/io/chymyst/lab/ChymystConsensus.scala

See also "Concurrency in Reactions", a tutorial book on `Chymyst` and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

I show how to use the Chemical Machine, implemented by the Chymyst library in Scala, to create concurrency patterns that depend on parameters known at run time, rather than defined statically.

Two main examples are described here:

- the "dining philosophers" problem with `n` philosophers

- a parallel simulation of Conway's "game of life" on a wrap-around rectangular board

In the code, the necessary molecules and reactions are first defined and stored in arrays, and only later activated. This allows us to implement a concurrent application that requires an _a priori_ unknown number of molecules and reactions.

I start from an implementation of "dining philosophers" with a statically defined number (5) of philosophers and proceed to an implementation where the number of philosophers is given at run time.

I implement the "game of life" using a single reaction for all cells on the entire board for all time steps; using a single reaction per time step; and using a single reaction per cell.

I show that the performance of the program is greatly improved if we reorganize the "chemistry" to avoid certain performance-killing anti-patterns:

- defining a large number of quick reactions in a single reaction site; this prevents the reactions from being scheduled concurrently

- defining a reaction with a cross-molecule guard condition, while emitting a large number of molecules for which that condition will have to be evaluated; this leads to a very slow scheduling of each reaction

The best performance for the "game of life" simulation is achieved when using a separate, single-reaction reaction site for each cell on the board and for each time step.

Code for this presentation: https://github.com/Chymyst/Chymyst/blob/master/src/test/scala/io/chymyst/lab/ChymystDinPhil.scala

https://github.com/Chymyst/chymyst-core/blob/master/core/src/test/scala/io/chymyst/test/DiningPhilosophersSpec.scala

https://github.com/Chymyst/chymyst-core/blob/master/core/src/test/scala/io/chymyst/test/DiningPhilo..

I show how to implement the recursive fork/join pattern in the Chemical Machine, using the Chymyst library, and compare this implementation with a standard Scala Future-based code.

I show two Chemical Machine recursive fork/join implementations, which are concise and very similar in their approach.

The first implementation is derived from straightforward reasoning about usage of molecules for handling concurrent data. However, this implementation creates a large number of new reaction sites (one per fork branch point), which leads to slow performance since the Chymyst library incurs significant overhead when creating a new reaction site. Nevertheless, this example illustrates how Chemical Machine code can declare new molecules and reaction sites within a reaction, which results in a recursively defined concurrent logic.

The second implementation of fork/join is based on the idea of replacing one of the molecules with a _continuation_, a function having the same type as the molecule emitter call. In this way, we avoid having to declare new reaction sites, and also avoid thread-switching: all reactions are scheduled through a single molecule within a single reaction site, and the "join" stage is executed on the same thread as the last of the "forked" subtasks. This implementation turns out to be significantly faster than the Scala Future-based benchmark.

Code for this presentation: https://github.com/Chymyst/Chymyst/blob/master/src/test/scala/io/chymyst/lab/ChymystForkJoin.scala

See also "Concurrency in Reactions", a tutorial book on `Chymyst` and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

I show how to implement the broadcast publish/subscribe pattern in the Chemical Machine, using the Chymyst library.

I derive the code by reasoning about what molecules need to be introduced to implement the desired functionality.

The resulting code is declarative -- it is just a few lines of reactions, and these reactions directly describe what the publish/subscribe functionality must do.

The test creates a publisher and two subscribers, and shows that we can subscribe, unsubscribe, and stop the publisher concurrently, at chosen times.

Code for this presentation: https://github.com/Chymyst/Chymyst/blob/master/src/test/scala/io/chymyst/lab/ChymystPubSub.scala

See also "Concurrency in Reactions", a tutorial book on `Chymyst` and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

This tutorial shows how to use the Chemical Machine to implement all the concurrency primitives of the Go language: the "goroutines", the "channels", and the "select" construction.

The main similarity between the Chemical Machine paradigm and the CSP paradigm (as implemented in the Go language) is that the channel "send" action is somewhat similar to emitting a molecule. When a message is "sent on a channel", it waits until someone "receives" it. This is similar to how an emitted molecule remains at the reaction site until some reaction consumes it.

At that point, similarities end and differences begin:

- Sending and receiving are blocking operations by default, while emitting a molecule is a non-blocking operation in the chemical machine

- Reactions start automatically on the chemical machine, while "goroutines" need to be started manually

- The Chemical Machine encourages data-driven reasoning, while the Go language requires reasoning about processes that run and send messages to each other at a given time

- Contention on which reaction can consume a molecule is defined statically in the chemical machine, while the Go language allows the developer to define new code that contends on reception from previously defined channels

- The chemical machine helps the programmer to notice potentially ambiguous, non-deterministic code, while the Go language does not make such ambiguity apparent

I illustrate these differences by running example code:

- the "select" construction

- the "ping-pong" example from https://talks.golang.org/2013/advconc.slide#6

Code for this presentation: https://github.com/Chymyst/Chymyst/blob/master/src/test/scala/io/chymyst/lab/ChymystGoLang.scala

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

This is a basic conceptual introduction into the Chemical Machine programming paradigm, using the `Chymyst` library for Scala.

Contents:

Explanation of the "chemical metaphor" of concurrent programming

How molecules and reactions model concurrent computations

First example: concurrent counter

Different versions of producer/consumer

Debugging and typical errors in chemical machine programming

Discussion of indeterminism

Simple declarative implementation for the "dining philosophers" problem

Comparison with implementations of the "dining philosophers" problem in other programming languages

This presentation follows chapter 1 of "Concurrency in Reactions", a tutorial book on `Chymyst` and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

This is a basic introduction into the Chemical Machine programming paradigm, using the `Chymyst` library for Scala.

This tutorial is for people who prefer to dive into code, without much discussion of the underlying concepts.

Here I explain only the bare minimum necessary to be able to start using the Chemical Machine and the `Chymyst` library for Scala. I emphasize the basic, practical aspects of creating molecules, reactions, and reaction sites when writing concurrent programs.

I give several examples, such as a concurrent counter with read access, a counter with blocking access, and a parallel `map` operation.

This presentation follows a chapter from "Concurrency in Reactions", a tutorial book on `Chymyst` and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

This tutorial implements a simple throttling (or "rate limiter") function for the Chemical Machine.

- Use systematic, logical reasoning to derive the implementation of the throttling functionality from the requirements

- Package the throttling code as a generic function

- Test the code

- Compare with throttling implementations for Akka, Monix, and ZIO

- Show how the Chemical Machine enables programmers to write declarative ("obviously correct") concurrent code

Code for the presentation: https://github.com/Chymyst/Chymyst/blob/master/src/test/scala/io/chymyst/lab/ChymystThrottler.scala

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

"Concurrency in Reactions", a tutorial book on `Chymyst` and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

SoftwareMill blog post, comparing Akka, Monix, and ZIO: https://blog.softwaremill.com/scalaz-8-io-vs-akka-typed-actors-vs-monix-part-1-5672657169e1

What would be a good domain-specific language (DSL) for expressing concurrent and parallel computations in a natural and declarative manner?

The goal is to design a programming language where the programmer can simply say "this computation with this data here needs to be made concurrent" - and obtain automatically parallelized, concurrent code, safe from race conditions.

Starting from examples and proceeding systematically, I derive precisely such a DSL for concurrent programming.

I show that, in order to achieve this kind of high-level, purely declarative concurrency, a programming language needs just two features:

1. A construction that labels data values as "concurrent data", using locally scoped "labels" (which are values of a special type).

2. A construction to define "concurrent functions" that can take labeled "concurrent data" as input and will emit new "concurrent data" as output.

The need for these two features is a logical consequence of our requirement to be able to denote concurrency declaratively, and to achieve automatic yet safe parallelism.

A programming language with these two features is equivalent to the "Chemical Machine", also known as "join calculus" in the academic literature. I demonstrate working test code in Scala using the Chemical Machine via the `Chymyst` library.

I explain how the "chemical metaphor" provides a more visual terminology that helps us understand how "concurrent data" and "concurrent functions" actually work.

Code for the presentation: https://github.com/Chymyst/Chymyst/blob/master/src/test/scala/io/chymyst/lab/ChymystPresentation.scala

Main repository for the `Chymyst` library: https://github.com/Chymyst/chymyst-core/

"Concurrency in Reactions", a tutorial book on Chymyst and Chemical Machine programming: https://winitzki.gitbooks.io/concurrency-in-reactions-declarative-multicore-in/content/

The main topic of this presentation is the use of the "free monad" as a design pattern. The free monad can replace imperative code with declarative data structures that are composable and can be "run" or "interpreted" later.

This session shows a straightforward implementation of the free monad and explains how it works in more detail. I also show how to use the `Free` type constructor as implemented in the `cats` library.

Contents:

How to transform an imperative API into a sequence of instructions

Specific example: a subset of API for Apache HDFS file system client (create / read / write / delete files)

Using case classes and a trait with a type constructor `HdfsOps[A]` to represent an imperative API

Showing why the result is not a monad and not even a functor

Motivating a monad (rather than, say, a `List`) as the data structure for a sequence of instructions

Defining the `pure` and the `flatMap` methods on a free monad

Transforming the free monad into an arbitrary target monad `M`, generically

Examples of "interpreters" for the `HdfsOps` free monad: one interpreter for `Writer` monad, another for the `Try` monad

Doing the same thing with the `cats` library

Final example: Encoding some Akka-Http routing directives as a free monad, and what advantages this could bring
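The core construction can be sketched as follows; a simplified key-value DSL with hypothetical `Put`/`Get` operations stands in for the talk's `HdfsOps`, and the "target monad" is simply the identity:

```scala
// A minimal free monad: a program is either a final value (Pure) or one
// instruction followed by a continuation (Suspend).
sealed trait Free[F[_], A] {
  def flatMap[B](f: A => Free[F, B]): Free[F, B]
  def map[B](f: A => B): Free[F, B] = flatMap(a => Pure(f(a)))
  def run(interp: Interpreter[F]): A
}
final case class Pure[F[_], A](a: A) extends Free[F, A] {
  def flatMap[B](f: A => Free[F, B]): Free[F, B] = f(a)
  def run(interp: Interpreter[F]): A = a
}
final case class Suspend[F[_], X, A](fx: F[X], next: X => Free[F, A]) extends Free[F, A] {
  def flatMap[B](f: A => Free[F, B]): Free[F, B] = Suspend(fx, (x: X) => next(x).flatMap(f))
  def run(interp: Interpreter[F]): A = next(interp(fx)).run(interp)
}
// An "interpreter" maps each instruction F[X] to a value X.
trait Interpreter[F[_]] { def apply[X](fx: F[X]): X }

// A hypothetical imperative API, represented as a data type of instructions.
sealed trait KVOp[A]
final case class Put(key: String, value: Int) extends KVOp[Unit]
final case class Get(key: String) extends KVOp[Option[Int]]

def put(k: String, v: Int): Free[KVOp, Unit] = Suspend(Put(k, v), (u: Unit) => Pure(u))
def get(k: String): Free[KVOp, Option[Int]] = Suspend(Get(k), (r: Option[Int]) => Pure(r))

// A program is just data until it is interpreted:
val program: Free[KVOp, Option[Int]] = for { _ <- put("x", 42); r <- get("x") } yield r

val store = scala.collection.mutable.Map.empty[String, Int]
val interp = new Interpreter[KVOp] {
  def apply[X](op: KVOp[X]): X = op match {
    case Put(k, v) => store.update(k, v)
    case Get(k)    => store.get(k)
  }
}

program.run(interp)  // Some(42)
```

Swapping in a different `Interpreter[KVOp]` (say, one that only logs) runs the same program against a different target, which is the main payoff of the pattern.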

Source code: https://github.com/winitzki/scala-examples/blob/master/chapter08/src/test/scala/example/FreeMonadPresentation.scala

https://github.com/winitzki/scala-examples/blob/master/chapter08/src/test/scala/example/AkkaHttpFreeMonad.scala

This short summary gives an overview of the material presented in detail in the corresponding chapter of the functional programming tutorial.

The summary is designed to help you decide whether you want to spend time on the full tutorial, and also to refresh the main points of the tutorial after you have already listened to it.

The full recording of chapter 7, part 2 is here: https://www.youtube.com/watch?v=p0fH_adTCnQ

Split into parts:

1 of 2: https://www.youtube.com/watch?v=u_XH7XkvFWM

2 of 2: https://www.youtube.com/watch?v=rKQqdAF9ecA

The full recording of chapter 8, part 1 is here: https://www.youtube.com/watch?v=NVlFZYxgXDw

The full recording of chapter 5 is here: https://www.youtube.com/watch?v=QhqsBgJ8lm8

The full recording of chapter 3 is split into three parts:

Part 1 (The types of higher-order functions): https://www.youtube.com/watch?v=Z_1s36ba4EY

Part 2 (Disjunction types): https://www.youtube.com/watch?v=MTViank6L24

Part 3 (The Curry-Howard correspondence): https://www.youtube.com/watch?v=sSDGjdfFQ-Y

The main topic is how several `fold` operations can be combined automatically into a single traversal. In Chapter 8, part 1, the central pieces of code were implemented automatically using the `curryhoward` library. This live coding session shows the implementation and explains how it works in more detail.

Defining a `fold` operation as a data structure and running it afterwards

Combining several `fold` operations into one

Why the applicative `zip` operation can be implemented on `fold`s even though `fold` is not a functor

Adding a final transformation to a `fold`, to make it into a functor

Implementing a DSL for `fold`s so that running average and standard deviation can be expressed concisely

Illustrating the difference between applicative and monadic combinators for `fold`s
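A sketch of the core data structure (simplified relative to the session's code):

```scala
// A left fold as a data structure: an initial value and a step function.
final case class Fold[A, B](init: B, step: (B, A) => B) {
  def run(xs: Seq[A]): B = xs.foldLeft(init)(step)
  // The applicative `zip`: runs both folds in a single pass over the data.
  def zip[C](that: Fold[A, C]): Fold[A, (B, C)] =
    Fold((init, that.init), { case ((b, c), a) => (step(b, a), that.step(c, a)) })
}

val sum   = Fold[Double, Double](0.0, _ + _)
val count = Fold[Double, Double](0.0, (n, _) => n + 1)

// Running average: two folds fused into one traversal, then a final transformation.
val (s, n) = sum.zip(count).run(Seq(1.0, 2.0, 3.0, 4.0))
val average = s / n  // 2.5
```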

Source code: https://github.com/winitzki/scala-examples/blob/master/chapter08/src/test/scala/example/FoldsPresentation.scala

Functional programming in the mathematical spirit.

Long and difficult, yet boring explanations given in excruciating detail.

Chapter 8: Applicative functors and profunctors. Part 1: Practical examples

Why monads do not usually describe effects that are independent or commutative

Intuitions behind the `map2` function, coming from monads

Generalizing `map` and `map2` to `map3` and `mapN`

How to implement `map2` and `map3` for `Either` to collect multiple errors from computations
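For instance (a sketch using `List[String]` as the error type):

```scala
// Sketch: an applicative map2 for Either that accumulates errors from both
// arguments, unlike monadic flatMap, which stops at the first Left.
def map2[E, A, B, C](ea: Either[List[E], A], eb: Either[List[E], B])(f: (A, B) => C): Either[List[E], C] =
  (ea, eb) match {
    case (Right(a), Right(b)) => Right(f(a, b))
    case (Left(e1), Left(e2)) => Left(e1 ++ e2)  // keep both error lists
    case (Left(e1), _)        => Left(e1)
    case (_, Left(e2))        => Left(e2)
  }

map2(Left(List("bad x")), Left(List("bad y")))((x: Int, y: Int) => x + y)
// Left(List("bad x", "bad y"))
```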

Why the `Future` and the `Reader` monads already have commutative and independent effects

How to transpose a matrix by using `map2` with `List`
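Concretely (a sketch using the zip-based applicative of `List`):

```scala
// The zipping map2 of the List applicative.
def map2[A, B, C](xs: List[A], ys: List[B])(f: (A, B) => C): List[C] =
  xs.zip(ys).map { case (a, b) => f(a, b) }

// Transposing a rectangular matrix: fold the rows with map2(_ :: _).
def transpose[A](matrix: List[List[A]]): List[List[A]] = matrix match {
  case Nil  => Nil
  case rows => rows.foldRight(List.fill(rows.head.length)(List.empty[A])) {
    (row, acc) => map2(row, acc)(_ :: _)
  }
}

transpose(List(List(1, 2, 3), List(4, 5, 6)))  // List(List(1, 4), List(2, 5), List(3, 6))
```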

Profunctors and their distinction from functors and contrafunctors

How to use profunctors to combine several `fold` passes into one

The distinction between applicative `fold` combinator and the monadic combinator: running average of running average

The distinction between applicative parser combinator and the monadic parser combinator: stopping at errors

Exercises

Slides: https://github.com/winitzki/talks/blob/master/ftt-fp/08-applicatives-part1.pdf

Code examples: https://github.com/winitzki/scala-examples/tree/master/chapter08/src

Functional programming in the mathematical spirit.

Long and difficult, yet boring explanations given in excruciating detail.

Chapter 7: Computations in a functor context II. Monads and semimonads. Part 2: Laws and structure of monads

How to derive the laws for `flatMap` from our intuitions about functor block computations

Deriving the laws for `flatten` from the laws for `flatMap`

Why `flatten` is equivalent to `flatMap`, and what it means to be "equivalent"

Why `flatten` has one law fewer than `flatMap`
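This equivalence can be illustrated with `List` (a sketch): each of `flatten` and `flatMap` is expressible through the other.

```scala
// flatMap expressed through flatten and map:
def flatMapVia[A, B](xs: List[A])(f: A => List[B]): List[B] = xs.map(f).flatten
// flatten expressed through flatMap (applied to the identity function):
def flattenVia[A](xss: List[List[A]]): List[A] = xss.flatMap(identity)

flatMapVia(List(1, 2, 3))(x => List(x, x * 10))  // List(1, 10, 2, 20, 3, 30)
flattenVia(List(List(1, 2), List(3)))            // List(1, 2, 3)
```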

How parametricity ensures naturality laws

Worked examples showing how to verify the associativity law for all standard monads

Examples of incorrect implementations of `flatten` that violate the associativity law

Motivation for full monads and laws for the `pure` method

Deriving the laws for `pure` in terms of `flatten`

Reformulating the monad laws in terms of Kleisli functions

A simplified definition of "category" and "morphism"

How category theory provides a conceptual generalization of "lifting"

Deriving the laws of `pure`, `flatten`, and `flatMap` from the laws of the Kleisli category
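For example (a sketch with the `Option` monad): a Kleisli function has type `A => Option[B]`, and the monad laws say that Kleisli composition is associative, with `pure` (here `Some(_)`) as its identity:

```scala
// Kleisli composition for the Option monad.
def kleisli[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
  a => f(a).flatMap(g)

val parse: String => Option[Int] = s => s.toIntOption
val sqrt: Int => Option[Double] = n => if (n >= 0) Some(math.sqrt(n.toDouble)) else None

val parseThenSqrt = kleisli(parse, sqrt)
parseThenSqrt("16")  // Some(4.0)
parseThenSqrt("-4")  // None
```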

Structure of semigroups and monoids: how to build up semigroups and monoids from parts

Structure of semimonads and monads: building up new monads from previously given monads, functors, and contrafunctors

The recording of the full lecture of Chapter 7, Part 2 is at https://www.youtube.com/watch?v=p0fH_adTCnQ but it was too long for YouTube to generate subtitles. For convenience, I split it into two parts, each now with subtitles. This is the first part of the recording.

Functional programming in the mathematical spirit.

Long and difficult, yet boring explanations given in excruciating detail.

Chapter 7: Computations in a functor context II. Monads and semimonads. Part 2: Laws and structure of monads

How to derive the laws for `flatMap` from our intuitions about functor block computations

Deriving the laws for `flatten` from the laws for `flatMap`

Why `flatten` is equivalent to `flatMap`, and what it means to be "equivalent"

Why `flatten` has one law fewer than `flatMap`

How parametricity ensures naturality laws

Worked examples showing how to verify the associativity law for all standard monads

Examples of incorrect implementations of `flatten` that violate the associativity law

Motivation for full monads and laws for the `pure` method

Deriving the laws for `pure` in terms of `flatten`

Reformulating the monad laws in terms of Kleisli functions

A simplified definition of "category" and "morphism"

How category theory provides a conceptual generalization of "lifting"

Deriving the laws of `pure`, `flatten`, and `flatMap` from the laws of the Kleisli category

Structure of semigroups and monoids: how to build up semigroups and monoids from parts

Structure of semimonads and monads: building up new monads from previously given monads, functors, and contrafunctors

Worked examples with full derivations of laws for most of the constructions

Why certain constructions can be only semimonads but not full monads

Why there cannot be a contravariant monad

Exercises, with examples and counter-examples of semimonads and monads

Slides: https://github.com/winitzki/talks/blob/master/ftt-fp/07-monads-part2.pdf

Code examples: https://github.com/winitzki/scala-examples/tree/master/chapter07/src

This is a series of extensive tutorials on functional programming.

The tutorials cover both the theory and the practice of functional programming.

Applied functional type theory (AFTT) is what I call the subdomain of computer science that should serve the needs of functional programmers who are working as software engineers.

It is these practitioners, rather than the academic researchers, who need to examine the incredible wealth of functional programming inventions over the last 30 years -- such as the "functional pearls" papers -- and to determine which theoretical material has demonstrated its pragmatic usefulness and thus belongs to AFTT, and which material may be tentatively omitted. This tutorial series is therefore also an attempt to define the proper scope of AFTT.

In the videos, I demonstrate code examples in Scala using the IntelliJ editor because this is what I am most familiar with. However, most of this material will work equally well in Haskell and some other FP languages. This is so because AFTT is not specific to Scala or Haskell -- a serious user of any other functional programming language will have to face the same questions and struggle with the same practical issues.

Uncopyright

All lecture content is authored by Sergei Winitzki, Ph.D.

The lecture materials are released into the public domain under a CC-0 license, which is an "uncopyright".

Intended audience

The material is presented here at medium to advanced level. It is not suitable for people unfamiliar with school-level algebra, or for people who are generally allergic to mathematics, or for people unwilling to learn new and difficult concepts through prolonged mental concentration.

The first two chapters are introductory and may be suitable for beginners in programming. Starting from chapter 4, the material becomes unsuitable for beginners.

Main features and goals of this tutorial series

an emphasis on the mathematical principles that guide the practice of functional programming

the presentation is self-contained -- it introduces and explains all required concepts and Scala features

an emphasis on clarity and understandability of all explanations, derivations, and code

some terminology and notations are non-standard -- this is in order to achieve a clearer and more logically coherent presentation of the material

all mathematical developments are thoroughly motivated by practical programming issues

avoid developing abstract theory for theory's sake

examples must show how each mathematical construction is used in practice for writing code

mathematical generalizations are not pursued beyond either practical usefulness or immediate pedagogical usefulness

each new concept or technique has sample code and unit tests to illustrate its usage and check correctness

currently the libraries `cats` and `scalacheck` are used throughout the sample code

each new concept or technique is fully explained via "worked examples" and drilled via provided exercises

answers to exercises are not provided, but it is verified that the exercises are doable and free of errors