Tuesday, December 11, 2018

Porting a simple toy app to Elmish

In the past I developed a toy app/kata (AnimalQuiz) for learning purposes. The first version was in Java; later I ported it to F#.

This post is about making the F# version run under Elmish.

Roughly speaking, an Elmish-based application consists of a set of possible messages, a model representing the current state of the application, and an update function that, given a message and a model, returns a new model (plus a list of further messages, but for my purposes I can skip that part and assume the list is always empty).
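To fix the idea, here is a minimal counter-style sketch of those three pieces (a generic illustration, not the AnimalQuiz code):

```fsharp
// A message type, a model, and an update function that returns the new
// model plus a (here always empty) list of follow-up messages.
type Msg =
    | Increment
    | Decrement

type Model = { Count: int }

let update (msg: Msg) (model: Model) : Model * Msg list =
    match msg with
    | Increment -> { model with Count = model.Count + 1 }, []
    | Decrement -> { model with Count = model.Count - 1 }, []
```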

My original AnimalQuiz was console based (with a gtk# interface as well), and it already followed a sort of model-view-update pattern, even if I wasn't aware of it at the time.

So I expect there won't be much work to do to port it to Elmish.

First step is creating a project based on the Elmish-Fable template.
According to the documentation at https://devhub.io/repos/kunjee17-awesome-fable,
you install the Fable-Elmish-React template with the following command:
dotnet new -i "Fable.Template.Elmish.React::*"

To create a new project:
dotnet new fable-elmish-react -n awesome

The generated template contains three sample (sub)apps:
  • The "Home" app, which shows a simple textbox whose content is copied into a text area (using the "onchange" property).
  • The classical counter app, called "Counter sample".
  • The "About" app, which just shows some static text.

I want to add AnimalQuiz as a fourth application, so what I'll do is:
  1. make a clone of one of them, and call the cloned app "AnimalQuiz";
  2. make the cloned app accessible from the outer menu;
  3. modify the cloned app to behave as the AnimalQuiz app.

The app I clone is the "Home" app: I just copy its three files into a new folder called AnimalQuiz, and in each module declaration I replace "Home" with "AnimalQuiz".

I have to add the following rows to the project file awesome.fsproj, in this order, to make sure they will be compiled:
<Compile Include="AnimalQuiz/Types.fs" />
<Compile Include="AnimalQuiz/View.fs" />
<Compile Include="AnimalQuiz/State.fs" />

Now step 2: I want to make the extra app reachable from the outer menu. Here are the files I have to change, and how:
module App.Types: the message and the model will reference the messages and the models of the sub apps:

Module Global (file Global.fs): I extend the Page discriminated union, and the toHash function:

Then I need to change the App.State module (only the rows containing some animalQuiz string are mine):

The last thing to do is to modify the App.View module in two places.
One is the definition of the menu:

The other is the definition of pageHtml, inside the root definition:

This is enough to make the new app visible and selectable from the outer menu.

Now it is time to take a look at the actual app which, until now, is just a copy of Home; I'm going to change everything there.

The model is the following:

These fields help manage the interaction between the user and the engine. I am not going to explain all of them, except CurrentState, of type State, which describes the different states of the interaction with the user:

What follows are the messages:

The model also includes a history of messages. This history is useful because I can "replay n steps from the beginning" by applying the first n messages of the history to the initial state, using the fold function.
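A sketch of that replay idea (hypothetical: the update function and message/state shapes here are generic stand-ins for the real ones):

```fsharp
// Replay: rebuild the state after n steps by folding the first n
// recorded messages over the initial state.
// 'update' is any function taking a message and a state and returning
// the new state plus a list of follow-up messages (always empty here).
let replay (update: 'msg -> 'state -> 'state * 'msg list)
           (initState: 'state)
           (history: 'msg list)
           (n: int) =
    history
    |> List.take n
    |> List.fold (fun (state, _) msg -> update msg state) (initState, [])
```

The undo described next is just this replay with n equal to the history length minus one.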

For instance, there is an "undo" that applies the list of messages to the initial state a number of times equal to the length of the history minus one:

| (Undo, _) ->
    let msgHistoryTruncated = model.MessageHistory |> List.take (List.length model.MessageHistory - 1)
    msgHistoryTruncated |> List.fold (fun (acc, _) x -> update x acc) (initState, [])

The repo is here: https://github.com/tonyx/animalquiz-elmish

Wednesday, September 26, 2018

Transforming a nested F# list to Javascript using tail recursion

If I need some JavaScript in a Suave view, I can include a reference to an external JavaScript file, like in the following code:

script ["type","text/javascript"; "src","/externalJavascript.js"] []

Another way to insert JavaScript code is adding it directly with the Raw tag:

script [] [Raw("alert('hello');")]

I can combine both ways to pass parameters to the external JavaScript file:

script [] [Raw("var myVar = 123")];
script ["type","text/javascript"; "src","/externalJavascript.js"] []

This way, externalJavascript.js can access myVar as a global variable.

This can be used as a "hack" to pass parameters to the JavaScript.

The reason for doing this is to make the html page richer while avoiding round trips to the server. In this case, I wanted to dynamically add and remove elements of an options list, based on the element selected in another select option list.

However, passing parameters this way means generating the JavaScript code as a string.

For example, suppose I have a nested list in F#, as follows:

[(1,[(1,2M);(3,4M)]);(11,[(11,22M);(33,44M)] ) ]

and I want to transform it into an equivalent JavaScript map:

new Map([[11, new Map([[11,22],[33,44],])],[1, new Map([[1,2],[3,4],])],]);

From the web page perspective, the key of the map (e.g. 11) is the element that, when selected in the first select option list, will change the content of the second select option list, populating it with the key's value, which is a list of two pairs: (11,22), (33,44).

(As you can imagine, such data comes from a database and is generated by a combination of map operations.)
I first tried to solve this with regular expressions, but the differences between unix-style (vim) regular expressions and the .NET libraries annoyed me. So I decided that a tail-recursive function would be cleaner and better, and I came up with the following:
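The original snippet is not shown here; a sketch of such a tail-recursive conversion (my reconstruction, not the original code; the entries come out in the input list's order) could look like this:

```fsharp
// Reconstruction sketch: turn a nested F# list into the string of a
// JavaScript Map of Maps, using accumulator-based (tail) recursion.
let toJsMap (data: (int * (int * decimal) list) list) =
    // inner level: render a list of pairs as "[x,y],[x,y],..."
    let rec pairs acc l =
        match l with
        | [] -> acc
        | (x, y) :: rest -> pairs (acc + "[" + string x + "," + string y + "],") rest
    // outer level: render each key with its own inner "new Map([...])"
    let rec outer acc l =
        match l with
        | [] -> acc
        | (k, vs) :: rest ->
            outer (acc + "[" + string k + ", new Map([" + pairs "" vs + "])],") rest
    "new Map([" + outer "" data + "]);"
```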

I'm sure there are cleaner ways to enrich the view, for instance using frameworks like Fable and Elmish, but I haven't yet found the time to figure out how to integrate them.

So at the moment, the solution is just using F# with Suave.IO (and Postgres), adding some "tricks" if I need to make the page more dynamic on the client side.

EDIT: instead of using a recursive function, I can just use the fold operator. For instance, the first inner function can be rewritten as follows:

let javascriptListPairs l = l |> List.fold (fun acc (x, y) -> acc + "[" + string x + "," + string y + "],") ""
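The outer level can be folded the same way; here is a sketch completing the conversion (the outer function name is mine):

```fsharp
let javascriptListPairs l =
    l |> List.fold (fun acc (x, y) -> acc + "[" + string x + "," + string y + "],") ""

// fold over the outer list, emitting one "key -> inner Map" entry each time
let toJavascriptMap data =
    let entries =
        data
        |> List.fold (fun acc (k, pairs) ->
            acc + "[" + string k + ", new Map([" + javascriptListPairs pairs + "])],") ""
    "new Map([" + entries + "]);"
```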

Friday, July 20, 2018

Checkboxes with Suave.IO, using dotliquid templates

Using checkboxes with Suave.IO

The Suave.IO web library for F# offers two ways of rendering html.

With Suave.Form/Suave.Html, the page is represented as a native F# list of html elements (nodes), and forms are defined by specific records. Producing an invalid html page is impossible this way, because the syntax check is strict.

Suave.DotLiquid instead adopts old-fashioned html text plus some variable substitution on the server side.

However, Suave.Form/Suave.Html does not support checkboxes yet.

We can't do the following to render a checkbox form, whereas we could if we used other supported html widget types (see https://theimowski.gitbooks.io/suave-music-store/content/en/form_module.html).

Defining the form:

type UserEdit = {
    Enabled: bool
    CanVoidOrder: bool
    CanManageAllorders: bool
    CanChangeThePrices: bool
}

let userEdit: Form<UserEdit> =
   Form ([
            Checkbox ((fun f -> <@ f.Enabled @>), [])
            Checkbox ((fun f -> <@ f.CanVoidOrder @>), [])
            Checkbox ((fun f -> <@ f.CanManageAllorders @>), [])
            Checkbox ((fun f -> <@ f.CanChangeThePrices @>), [])
         ], [])

Rendering the form in the UI:

       { Form = Form.userEdit
         Fieldsets =
             [ { Legend = "Modify user"
                 Fields =
                     [ { Label = "enabled"
                         Html = Checkbox (fun f -> <@ f.Enabled @>) ["value", true] }
                       { Label = "can void order"
                         Html = Checkbox (fun f -> <@ f.CanVoidOrder @>) ["value", true] }
                       { Label = "can manage all orders"
                         Html = Checkbox (fun f -> <@ f.CanManageAllorders @>) ["value", true] }
                       { Label = "can change the prices"
                         Html = Checkbox (fun f -> <@ f.CanChangeThePrices @>) ["value", false] } ] } ]

         SubmitText = "modify" }

This is probably how it will be possible in the future, but it is not possible yet: bool and checkbox are not supported.

So I needed dotliquid templating to overcome this; there, any html is valid, because it is actual plain html text.

A recap of checkboxes in html:

<form method="post">
<input name="myName1" value="myValue1" type="checkbox" />myValue1
<input name="myName2" value="myValue2" type="checkbox" />myValue2
<input type="submit" value="update" />
</form>

When this form is posted, it sends myName1=myValue1 if the first checkbox is checked, and myName2=myValue2 if the second is checked.

So what we can do to use dotliquid forms with checkboxes is to use option types. I don't really care about the values passed, only whether they are passed or not; for that reason, option types fit the purpose.

type UserEdit = {
    Enabled: string option
    CanVoidOrder: string option
    CanManageAllorders: string option
    CanChangeThePrices: string option
}

// we can leave the following empty (no validators and so on)
let userEdit: Form<UserEdit> =
   Form ([], [])

So the code in App.fs would be like:

type modelForLiquid = {name: string} // how to pass parameters to the template, if needed

let manageUser (user: Db.User) =
    choose [
        GET >=>
            (let o = { name = "yourname" }
             DotLiquid.page ("template.html") o)
        POST >=>
            bindToForm Form.userEdit (fun form ->
                let ctx = Db.getContext ()
                let isEnabled = form.Enabled.IsSome
                let canVoidOrder = form.CanVoidOrder.IsSome
                Db.UpdateUser user.UserId isEnabled canVoidOrder ... ctx)
    ]


Something similar would be needed for radio buttons.

What we need to be sure about is that the form uses POST rather than GET, and to understand how html passes the parameters: basically, they are a list of name-value pairs.
The POST part of our code in the controller (usually App.fs) can handle them in a straightforward way, as seen in the code above.

It can be good to know, just in case ;-)

Wednesday, March 22, 2017

Priority criteria for backlog items, based on measures of value and cost

In this post I sketch an ordering criterion for prioritizing product backlog items, considering some possible measures of value and cost, and their uncertainty.


In lightweight, empirical ways of organizing work (called "agile" - such as Scrum, XP, Kanban), used mainly in software product development, there is a set called the "product backlog", made of "product backlog items" (p.b.i.) specifying small features which, all together, once implemented, will constitute the whole product.

For example, if the product is a word processor, its features, i.e. its p.b.i., may include the ability to:
- underline text,
- make text bold,
- create numbered lists,
- justify text in various ways,
- save and load in various formats, and so on...

These lightweight methods require that the product be usable in every intermediate, partial version (i.e. one implementing a subset of the p.b.i. in the backlog).
So it is important that the p.b.i. be as independent of each other as possible (it should not happen that working on one requires having completed another first, even if in some cases that is unavoidable anyway).

A completed p.b.i. must be actually releasable, working, usable, testable, and tested. Its test should be repeatable automatically, so it can be re-run quickly, for example to detect regressions in later releases (in case new additions break previous features). If the tests were not automated, it would be hard to make small releases that increment the product's features, because manual regression tests would pile up and slow everything down (unless one can rely on other fast ways to guarantee non-regression, such as massive testing done, say, by a certain slice of users).

A completed p.b.i. is called a "running tested feature" (r.t.f.), in the case of a software product.

Priority by value

In general, the p.b.i. to work on first, i.e. for the releases of the first iterations, are chosen in an order such that those with the highest value (and lowest cost, see the next section) get priority.

The reason should be obvious, but if it isn't, here is a justification.

Imagine doing the opposite, i.e. completing the most marginal, inconsequential p.b.i. first.
In that case the total value delivered over a certain time span, especially at the beginning of the life cycle, will be lower than the total value deliverable in the same time. The project's sponsors might then perceive the project as having little overall value, and decide to shut it down.

So it is better to select the items with the highest value first.

To "measure" this value, different reasoning is needed case by case.
Here is an example.

Say the product is a portal, and we want users to be able to interact once "recognized", for example through authentication by an external provider where they already have an account, such as a social network like Google Plus or Facebook.

The more users we attract, the more value we generate (for example through an increase in the expected number of clicks on ads).
Moreover, an authenticated, hence "recognized", user yields a higher probability of ad clicks, because the ads are better targeted.

In a coarse-grained evaluation, we can say that the choice generating the most value is simply authentication via the social network with the largest number of registered users.
I find this evaluation acceptable if I assume that:
- the propensity of a given user to click an ad is independent of the kind of social network they have an account on;
- the probability of "landing" on our portal and deciding to interact with it is independent of the social network one belongs to.
(If I have reason to believe these assumptions are far from true, I need a more complicated line of reasoning that includes certain correlations.)

The most rational choice is to work first on the p.b.i. we expect to generate the most value (implementation costs being equal).

Moreover, this choice is not only preferable: it may attract so many users that looking for more by implementing authentication via other social networks later becomes practically pointless (unless other "value measurement" criteria still call for it, such as fairness concerns or recommendations from antitrust authorities, etc.).

To conclude this little analysis:
- it is useful to pick first the p.b.i. that generate the most value (also relative to cost, as I'll discuss later);
- we need a criterion to measure this value (which is not always economic, or at least not directly);
- the measurement criterion must be just accurate enough. For example, in this case we don't need to know how many users each social network has: we only need to know which of the two has more;
- the model by which I measure this value should be simple, if I know that a more complicated model (one accounting for possible correlations) would add accuracy I can do without.

Accounting for correlations would mean estimating the potential number of ad clicks not only from the number of users but also from their greater or lesser propensity to click ads, i.e. from the kind of social network they come from. The measure would have been more complicated, but more accurate.

Not only the benefit but also the cost should count in the criterion for choosing priority.

Priority by cost

By cost we essentially mean the number of working days, i.e. the time we think it will take.

If for me (for the team) implementing Google Plus authentication is harder, more complicated, or slower than implementing Facebook's, while the value is equivalent, then it is rational to choose Facebook (value being equal).

Besides, if we pick the quicker "authentication via Facebook" p.b.i., it will start producing value sooner, will have cost less, and may also generate value of an informational kind (for example, it will let us verify how much ad clicks increase, and whether it is worth trying to increase them further by implementing another, similar p.b.i.).

Putting the two lines of reasoning together, priority depends on a comparison between value/cost ratios.

Ultimately the rational criteria will be of the kind: choose A if it probably has a better value/cost ratio than completing B.
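The comparison rule above can be sketched as a tiny ordering function (hypothetical names and numbers, just to fix the idea):

```fsharp
// Hypothetical backlog items with estimated value and cost;
// prioritize by descending value/cost ratio.
type Item = { Name: string; Value: float; Cost: float }

let prioritize items =
    items |> List.sortByDescending (fun i -> i.Value / i.Cost)

// invented estimates: Facebook login has ratio 20, Google login ratio 9,
// so Facebook login comes first
let backlog =
    [ { Name = "Google login"; Value = 90.0; Cost = 10.0 }
      { Name = "Facebook login"; Value = 100.0; Cost = 5.0 } ]
```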

But this is a comparison between hypothetical values.
We know it could turn out to be wrong (choosing B would have been better than A), and we may never be able to verify that except, perhaps, in hindsight. However, we can attach a degree of confidence to the evaluation we make.

Calibrated estimators

We can attach a degree of confidence to the estimated comparison (greater or smaller) between two unknown values.

For example, 50% means a total lack of confidence: taking a position is, for us, like flipping a coin.
100% means absolute certainty.

A "calibrated estimator" is someone able to attach, to whatever they assert (between two alternative choices), a degree of confidence that matches their probability of being right (using a "subjective" notion of probability).

So if the calibrated estimator claims to be 60% sure, then the probability is 60%; or rather - since we are talking about subjective probabilities - given the estimator's knowledge, a bet would be worth taking whenever the payout is at least 10/6 of the stake (see "fair game").

Becoming a "calibrated estimator" is a skill that can be learned.

In essence, a calibrated estimator is not someone with a high degree of confidence in a choice between two possibilities, but someone able to make that degree of confidence coincide with the probability of being right (edit: someone who, by the rational criteria of the betting agent, would accept a bet whose payout is not lower than the fair one for the stated confidence level).

(By extension, the concept of a calibrated estimator also applies to "estimating" point values, where the task is to identify "90% confidence intervals", but that is a topic of its own.)

My doubts about this possible decision model for priority, as described here, are:
- the examples presented are simple for the sake of communication, but some real cases can be much more complicated;
- in the example I treated the comparisons of values and of costs separately, avoiding showing the decision based on comparing the ratios of the two (again, to avoid overcomplicating the reasoning);
- last but not least, we are dealing with systems that react to predictions about themselves: measuring something like the time it will take me to do a thing alters the value itself, and that is not necessarily good (optimistic estimates can induce a certain "haste" that then hurts quality).
This aspect is partly mitigated by the fact that these are not "point predictions" but comparison-based ones: I claimed we rely on the fact that A < B, without having to say how much A or B measures in numerical terms of any kind.


A rational choice model based on "calibrated estimates" when comparing p.b.i. can guide certain priority choices within so-called "agile" development/work/project methods.

Ref.: How To Measure Anything - Douglas W. Hubbard
fair game: https://it.wikipedia.org/wiki/Gioco_equo

Sunday, October 9, 2016

Simple decision trees using xml in F# and Fsharp.Data.Xsd package

This post is about new xml-based "Open" and "Save" features, and is the follow-up to my latest post about the "animal quiz" game (http://tonyxzt.blogspot.it/2016/09/adding-simple-gtk-gui-in-console-f.html). The "game" is a simple dialog-based yes/no decision tree builder, for the console or gtk (enabled by the "gui" command line parameter), written in F# (under Mac OS X using Monodevelop/Xamarin/Visual Studio Code + Ionide).

Sources are on github: https://github.com/tonyx/fsharpanimalquizgtk

Here I share some ideas about the solution I implemented for those "Open" and "Save" features.

I used an Xml/Xsd package, available via nuget: PM> Install-Package FSharp.Data.Xsd

(webpage https://giacomociti.github.io/FSharp.Data.Xsd )

For me, a convenient way to represent the tree in Xml is shown by the following examples.

A single-node decision tree containing just one animal:

<node><animal>elephant</animal></node>

A two-leaf decision tree, with a single yes/no question at the root to distinguish the two leaf (i.e. single animal) nodes:

<node>
  <question>is it big?</question>
  <yesBranch><node><animal>elephant</animal></node></yesBranch>
  <noBranch><node><animal>cat</animal></node></noBranch>
</node>

The following xml schema validates the previous instances (and more, of course; I verified this with some NUnit-based tests):

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
    <xs:element name="node">
        <xs:complexType>
            <xs:choice>
                <xs:element name="animal" type="xs:string"/>
                <xs:sequence>
                    <xs:element name="question" type="xs:string"/>
                    <xs:element name="yesBranch">
                        <xs:complexType>
                            <xs:sequence>
                                <xs:element ref="node"/>
                            </xs:sequence>
                        </xs:complexType>
                    </xs:element>
                    <xs:element name="noBranch">
                        <xs:complexType>
                            <xs:sequence>
                                <xs:element ref="node"/>
                            </xs:sequence>
                        </xs:complexType>
                    </xs:element>
                </xs:sequence>
            </xs:choice>
        </xs:complexType>
    </xs:element>
</xs:schema>

(I don't know if this implementation is the simplest possible, but I know that it works)

According to this schema, a node is one of two choices:
a single element named animal, or a sequence of the following three elements:
- a question (to distinguish categories of animals by yes or no)
- a "yesBranch" (containing the subtree related to the "yes" answer to that question)
- a "noBranch" (containing the subtree related to the "no" answer to that question)

yesBranch and noBranch of course use recursion: they contain the "node" element again, which is the root of this xml type.

I need to convert the xml tree from and to the internal representation of the tree. They are logically equivalent (though field names may slightly differ from the corresponding xml elements, which is still ok for me):

type KnowledgeTree = AnimalName of string | SubTree of Tree
and Tree = { Question: string; YesBranch: KnowledgeTree; NoBranch: KnowledgeTree }

Converting the internal tree representation to the corresponding xml string is based on producing plain xml text recursively:

let rec treeToXml tree =
    match tree with
    | AnimalName name -> "<node><animal>" + name + "</animal></node>"
    | SubTree { Question = question; YesBranch = yBranch; NoBranch = nBranch } ->
        "<node><question>" + question + "</question>" + "<yesBranch>" + treeToXml yBranch + "</yesBranch><noBranch>" + treeToXml nBranch + "</noBranch></node>"
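For example, applying treeToXml to a small tree gives the kind of xml shown earlier (a self-contained sketch: the type definitions are repeated, and the sample question and animal names are mine):

```fsharp
type KnowledgeTree = AnimalName of string | SubTree of Tree
and Tree = { Question: string; YesBranch: KnowledgeTree; NoBranch: KnowledgeTree }

let rec treeToXml tree =
    match tree with
    | AnimalName name -> "<node><animal>" + name + "</animal></node>"
    | SubTree { Question = question; YesBranch = yBranch; NoBranch = nBranch } ->
        "<node><question>" + question + "</question>"
        + "<yesBranch>" + treeToXml yBranch + "</yesBranch>"
        + "<noBranch>" + treeToXml nBranch + "</noBranch></node>"

// a two-leaf tree (question and animals invented for illustration)
let sample =
    SubTree { Question = "is it big?"
              YesBranch = AnimalName "elephant"
              NoBranch = AnimalName "cat" }
// treeToXml sample →
// <node><question>is it big?</question><yesBranch><node><animal>elephant</animal></node></yesBranch><noBranch><node><animal>cat</animal></node></noBranch></node>
```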

(This function does not depend on the schema: if I want to change the schema, I need to change treeToXml accordingly; it is only a few lines of code anyway.)

To convert the xml representation to an object, I followed the official documentation, which introduces an XmlProvider type that needs to see the schema.

For instance I have

 type KnowledgeBaseXmlSchema = XmlProvider<Schema="""
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            elementFormDefault="qualified" attributeFormDefault="unqualified">
            <xs:element name = "node">


and then use the Parse method of KnowledgeBaseXmlSchema as follows:

let xmlWrapped = KnowledgeBaseXmlSchema.Parse """<node>

Now the xmlWrapped object is coherent, in terms of its fields and their types, with the xml datatype defined by the schema. In particular, optional nodes are represented by option types.

Let's go through this a little:
The Animal field obtained from KnowledgeBaseXmlSchema.Parse is an option because the corresponding element in the schema is optional as well. The Visual Studio Code + Ionide editor helps detect this kind of thing statically.

The other way around is also true: if an element is mandatory, it is not "wrapped" into an option.
So in the following lines of code, myNode is not an option; its type is XmlProvider<...>.Node, and in fact it is a mandatory element according to the xml schema.

All that said, let's take a look at the xmlToTree function:

let rec xmlToTree (tree: KnowledgeBaseXmlSchema.Node) =
    match tree.Animal with
    | Some x -> AnimalName x
    | None ->
        match (tree.Question, tree.YesBranch, tree.NoBranch) with
        | (Some y1, Some y2, Some y3) -> SubTree { Question = y1; YesBranch = xmlToTree y2.Node; NoBranch = xmlToTree y3.Node }
        | _ -> failwith "error wrapping xml tree"

I had to specify the type (KnowledgeBaseXmlSchema.Node) of the tree parameter, because type inference does not work in some cases (and this is one of them).

(That is not terrible: in the rare cases where type inference doesn't work, I can just inspect the type of the argument we are going to pass and modify the function signature accordingly.)

As I mentioned, "Animal" is an instance of an option type, so I have to pattern match it with the "Some x" syntax; if the match succeeds, the corresponding internal tree (a single-node tree) can simply be instantiated by AnimalName x.

If the root is not an animal (leaf node), it matches the "None" part of the pattern instead, and in that case I assume there must be a sequence of the following elements: a Question, a YesBranch and a NoBranch, which are options as well.
So I match them as a triplet (Some y1, Some y2, Some y3) and simply build a SubTree from the Question and the recursive calls of the same function on the yesBranch and noBranch subnodes.
(Note that this implementation is recursive, but not tail-recursive.)

(NUnit tests are available in this file on github, if you want to play more with what I share in this post: https://github.com/tonyx/fsharpanimalquizgtk/blob/master/fsharpAnimalQuizKata/XmlTests.fs)

In conclusion:
- option types are useful
- having optional nodes represented as option types by an xml/xsd processing library is very useful
- kudos to the static analysis tools provided by editors; they help check these things before compiling or testing.

Possible new features I may add to this toy project:
- adding scrollbars to the main window (to easily visualize trees as they grow)
- indenting the xml output files, to make them more human readable
- better error handling in xml processing, and asking for confirmation before overwriting an existing file, for obvious reasons.

Thanks for reading. Enjoy.

Saturday, September 24, 2016

Adding simple gtk# gui to a console F# application

In this post I quickly describe how I added a simple gtk# interface to a program I wrote some time ago as an exercise in F#
(the Animal Quiz kata: http://tonyxzt.blogspot.it/2014/02/animal-quiz-problem-in-f.html).

The first version was command-line based, and was developed with extensive use of unit tests and interaction tests (using the Rhino framework).

That version is still included in this new one (git: https://github.com/tonyx/fsharpanimalquizgtk), except that it is now possible to run it in a gtk# gui widget by adding the parameter "gui" to the command line.

For instance, under mono, I will run it as
> mono fsharpAnimalQuizKata.exe gui

and the following window will appear:

The interaction with the user is still text based, as in the console version: the user dialogues with the "engine" using the textbox and its label, instead of standard input/standard output.

Now the knowledge tree can be visualised, hidden, expanded by using the three buttons available.

The knowledge base starts as a single node with the animal "elephant" in it.

An example of interaction to make the engine add a new node with a "cat" is:

After this dialogue, the knowledge tree can be expanded and we can see the following:

Now the loop starts again, and in a new dialogue we can feed the engine to make it store a new node "ant", which is small and is an insect, so expanding the knowledge tree we can see the following:

Essentially the logic of using gtk# in F# is the same as in C#.
I've found that translating existing C#-based gtk# tutorials to F# requires:
- declaring as mutable any objects that will be reassigned.
- when event handling in C# has the form:

// adding handler
btn.Clicked += on_btn_clicked;

// declaration of the handler
void on_btn_clicked (object obj, EventArgs args) {
    // method body
}

An equivalent in F# is 

// adding handler
do btn.Clicked.AddHandler(fun i e -> this.on_btn_clicked(i,e))

// declaration of the handler
member this.on_btn_clicked(i,e: EventArgs) =
// method body

Using Monodevelop/Xamarin I can create an empty F# gtk# template project that starts with an empty window. In my experience, starting from that and consulting any C# gtk# tutorial to add buttons, menus, etc. is good enough to play with a gtk# gui.

(I'm sure it should also work on .NET and Windows, but I have not tried yet.)

To "wrap" a knowledge tree into a gtk# TreeStore view I used this function:

let rec treeToTreeStore tree (store: TreeStore) iter prefix =
    match tree with
    | AnimalName name ->
        do store.AppendValues(iter, [| prefix + name |]) |> ignore
    | SubTree { Question = quest; YesBranch = yBranch; NoBranch = nBranch } ->
        let innerIter = store.AppendValues(iter, [| prefix + quest |])
        do treeToTreeStore yBranch store innerIter "YES: "
        do treeToTreeStore nBranch store innerIter "NO:  "

Just as a reminder, the knowledge tree type is defined as follows:

type KnowledgeTree = AnimalName of string | SubTree of Tree
and Tree = { Question: string; YesBranch: KnowledgeTree; NoBranch: KnowledgeTree }

which means that a knowledge tree is either a leaf node consisting of a string (the name of the animal), or a yes/no question and two subtrees (one leading to the animals for which the answer is "yes", the other to the animals for which the answer is "no").
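For instance, a tree that knows an elephant, a cat, and an ant might look like this (the types are repeated to keep the snippet self-contained; the questions are invented for illustration, since the real ones depend on what the user typed):

```fsharp
type KnowledgeTree = AnimalName of string | SubTree of Tree
and Tree = { Question: string; YesBranch: KnowledgeTree; NoBranch: KnowledgeTree }

// hypothetical knowledge tree with three leaf animals
let sample =
    SubTree { Question = "is it big?"
              YesBranch = AnimalName "elephant"
              NoBranch = SubTree { Question = "is it an insect?"
                                   YesBranch = AnimalName "ant"
                                   NoBranch = AnimalName "cat" } }
```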

I found it reasonable to make extensive use of automated testing of the console-based version by mocking standard input and standard output, while avoiding tests of the gui.

Mocking standard output and standard input only requires these kinds of abstractions:

type OutStream =
    abstract Write: string -> unit

type InStream =
    abstract Input: unit -> string
that will be implemented in the real application as follows:

type ConsoleOutput() =
    interface OutStream with
       member this.Write(x) = printf "%s\n" x

type ConsoleInput() =
    interface InStream with
        member this.Input() = Console.ReadLine()

whereas mocks can be used to simulate the input/output interaction (simulating that the user writes some input values, and adding expectations about the values the engine will write):

    let ``with a one leaf node knowledge tree, engine asks if it is the animal on that node, the user says yes, and so the engine answer with "yeah!"``() =
        // setup 
        let mockrepo = new MockRepository()
        let inStream = mockrepo.StrictMock<InStream>()

        let outStream = mockrepo.StrictMock<OutStream>()

        // setup expectations
        Expect.Call(outStream.Write("think about an animal")) |> ignore
        Expect.Call(inStream.Input()).Return("whatever") |> ignore

        Expect.Call(outStream.Write("is it a cat?")) |> ignore     
        Expect.Call(inStream.Input()).Return("yes") |> ignore

        Expect.Call(outStream.Write("yeah!")) |> ignore 
        Expect.Call(inStream.Input()).Return("anything") |> ignore

        // arrange
        let tree = AnimalName "cat"
        let initState = Welcome
        let numOfLoops = 3

        // switch Rhino Mocks from record mode to replay mode
        mockrepo.ReplayAll()

        // act
        runUntilIndex outStream inStream tree initState numOfLoops |> ignore

        // verify expectations
        mockrepo.VerifyAll()

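As a lighter alternative to a mocking library, the two abstractions can also be faked by hand, scripting the input lines and recording the output. This is my own sketch, not part of the original test suite; RecordingOutput and ScriptedInput are hypothetical names, and the interfaces are repeated from above to keep the snippet self-contained.

```fsharp
open System.Collections.Generic

type OutStream =
    abstract Write: string -> unit

type InStream =
    abstract Input: unit -> string

// records everything the engine writes, for later assertions
type RecordingOutput() =
    let written = ResizeArray<string>()
    member this.Written = List.ofSeq written
    interface OutStream with
        member this.Write(x) = written.Add(x)

// replays a scripted list of user answers, one per Input() call
type ScriptedInput(answers: string list) =
    let queue = Queue<string>(answers)
    interface InStream with
        member this.Input() = queue.Dequeue()
```

A test would then run the engine with these fakes and assert on `Written`, instead of declaring expectations up front.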
I guess mocking the GUI in a similar way would be more complicated, and would add less value, because the GUI is just a thin layer over the engine, which is already extensively tested.

A special mention goes to Visual Studio Code with Ionide as the editor, which provides some autocompletion features that are missing in MonoDevelop/Xamarin.

Future work could be related to:
- saving/loading trees, and
- editing tree nodes in the tree view.


Friday, March 21, 2014

Venn diagrams and Categorical Propositions in F#

Categorical propositions express specific relations between elements of different sets, in terms of a quantifier, which can be universal (like All, or None) or particular (Some), and a quality, affirmative or negative ("is" or "is not").

Examples: All Mammals are Animals, No Mineral is Alive, Some people are Smart, Some people are Not Smart.
In this post I am going to show some F# code I wrote that represents categorical propositions and, in particular, their union, with an internal structure that reflects the representation based on Venn diagrams.
This can be used to verify in an automated way the compatibility between different propositions, in the same way we use simple rules by Venn diagrams.
Example: "All S is P" and "Some S is not P" are not compatible. Venn diagrams show in this case that the two diagrams can't be overlapped.
An area of a diagram can be blank, can have a '*', or can be black-filled, but it cannot have a '*' (meaning: there is something here) and be black (meaning: there is nothing here) at the same time.
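This overlap rule can be sketched directly in F#. Here combineStatus and combineDiagrams are my own illustrative names, not part of the code described in this post; the types anticipate the SectionStatus and record definitions given later on.

```fsharp
type SectionStatus = Empty | BlackFilled | Starred
type twoSetsVennDiagram = {S: SectionStatus; SP: SectionStatus; P: SectionStatus}

// two statuses for the same region are compatible unless one claims
// "there is something here" (Starred) and the other "nothing is here" (BlackFilled)
let combineStatus a b =
    match a, b with
    | Starred, BlackFilled | BlackFilled, Starred -> None   // contradiction
    | Empty, other | other, Empty -> Some other
    | same, _ -> Some same                                  // both Starred or both BlackFilled

// overlapping two diagrams succeeds only when every region is compatible
let combineDiagrams d1 d2 =
    match combineStatus d1.S d2.S, combineStatus d1.SP d2.SP, combineStatus d1.P d2.P with
    | Some s, Some sp, Some p -> Some {S=s; SP=sp; P=p}
    | _ -> None

// "All S is P" vs "Some S is not P": the S region clashes, so there is no overlap
let allSIsP     = {S=BlackFilled; SP=Empty; P=Empty}
let someSIsNotP = {S=Starred; SP=Empty; P=Empty}
combineDiagrams allSIsP someSIsNotP   // → None
```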

The building blocks are the four basic forms of categorical propositions, classified as affirmative or negative, and universal or particular, as follows:

They have a representation based on Venn diagrams:

Venn diagrams in this case consist of two partially overlapping circles indicating two sets, which form three regions: the left region indicates what is in S but not in P, the middle region what the two sets have in common, and the right region what is in P but not in S.
To indicate that a region is empty we black-color it, and to indicate that there must be something in a region we put an asterisk in it. It is not possible to black-color a region and put an asterisk in it at the same time, because that would lead to a contradiction (like asserting both "All S is P" and "Some S is not P", which is impossible).

Symmetric basic form diagrams (E and I) represent propositions that are valid also in reverse: so "Some S is P" also means "Some P is S", and "No S is P" also means "No P is S".

As long as we avoid placing an asterisk and black color in the same region (which leads to a contradiction), we can build diagrams more complex than the basic forms that are still valid (consistent), and which represent several propositions at once, for example as follows:

This diagram means: Some S is not P, No S is P, No P is S, and Some P is not S.

I want to write some F# code solving the problem of translating any Venn diagram like the previous one, involving two sets, into the list of all the categorical propositions that follow from it.

I'll use records to represent both the diagram and a categorical proposition.
To explain what I will get, this is a possible automated NUnit test related to the "A" proposition:

    let ``A Venn diagram to proposition``() =
        let allSIsP = {S=BlackFilled; SP=Empty; P=Empty}
        let expectedProposition = [{quantifier1=All;category1=S;appartenence=Is;category2=P}]
        let actual = VennToPropositions allSIsP
        Assert.AreEqual(expectedProposition, actual)

I need to define:
a type for any diagram involving two categories, which will be a record:

type twoSetsVennDiagram = {S: SectionStatus; SP: SectionStatus; P: SectionStatus}

A "SectionStatus" to mark each of its regions with the appropriate status:

type SectionStatus = Empty | BlackFilled | Starred

A "Propositions" type (again, as a record):

type twoTermsProposition = {quantifier1: Quantifier; category1: CategoryRef; appartenence: Appartenence; category2: CategoryRef}

Quantifier will be:

type Quantifier = All | Somes | No

("Some" is already used by F# as the option type's Some case, so to avoid confusion I chose "Somes")

Appartenence (i.e. "belonging") will be:

type Appartenence = Is | IsNot

CategoryRef will simply be a possible name of the set involved:

type CategoryRef = S | M | P 

("S", "M", "P" are typical names used to model categorical propositions and syllogisms (which I'm not going to write about this time), so they are just conventional names for the sets involved in categorical forms. For instance, in the classical syllogism "Socrates is human, all humans are mortal, therefore Socrates is mortal", S is "everything that is Socrates", M is "everything that is human", and P is "everything that is mortal".)

The implementation aimed just at passing this test is:

let VennToPropositions diagram = [{quantifier1=All;category1=S;appartenence=Is;category2=P}]

I expect that I'll need a sort of lookup table for the basic diagrams (those with only one non-empty region, either BlackFilled or Starred), and that I can grow this lookup one step at a time, adding an entry for each new test case:


let VennToPropositions diagram =

    match diagram with 

        | {S=Empty; SP=Empty;P=Empty} -> []

        | {S=BlackFilled; SP=Empty;P=Empty} ->  [{quantifier1=All;category1=S;appartenence=Is;category2=P}]

        | {S=Empty; SP=Empty;P=BlackFilled} ->  [{quantifier1=All;category1=P;appartenence=Is;category2=S}]

        | {S=Empty; SP=BlackFilled;P=Empty} ->  [{quantifier1=No;category1=S;appartenence=Is;category2=P};{quantifier1=No;category1=P;appartenence=Is;category2=S}]

        | {S=Empty; SP=Starred;P=Empty} -> [{quantifier1=Somes;category1=S;appartenence=Is;category2=P};{quantifier1=Somes;category1=P;appartenence=Is;category2=S}]

        | {S=Starred; SP=Empty;P=Empty} -> [{quantifier1=Somes;category1=S;appartenence=IsNot;category2=P}]

        | {S=Empty; SP=Empty;P=Starred} -> [{quantifier1=Somes;category1=P;appartenence=IsNot;category2=S}]

Basically, the cases considered in this "lookup" table are the completely empty diagram, the four basic forms, and the "reverse" of the forms that are not symmetrical:
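For example, feeding the symmetric E form into this lookup yields the proposition in both directions. Here is a condensed, self-contained sketch of just that case (the types are repeated from the listings above; the other basic forms are elided):

```fsharp
type SectionStatus = Empty | BlackFilled | Starred
type twoSetsVennDiagram = {S: SectionStatus; SP: SectionStatus; P: SectionStatus}
type Quantifier = All | Somes | No
type Appartenence = Is | IsNot
type CategoryRef = S | M | P
type twoTermsProposition =
    {quantifier1: Quantifier; category1: CategoryRef; appartenence: Appartenence; category2: CategoryRef}

let VennToPropositions diagram =
    match diagram with
    | {S=Empty; SP=BlackFilled; P=Empty} ->
        [{quantifier1=No; category1=S; appartenence=Is; category2=P};
         {quantifier1=No; category1=P; appartenence=Is; category2=S}]
    | _ -> []   // the other basic forms, elided in this sketch

// the E form is symmetric, so the lookup yields two propositions:
// "No S is P" and "No P is S"
VennToPropositions {S=Empty; SP=BlackFilled; P=Empty}
```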

But the match against more complex diagrams is missing, like the following:

I can see that this diagram is the union of the following basic forms:

The resulting propositions are the union of those of the basic forms.

(Extreme case: when all the regions are filled.
The result may look weird: "All S is P - No S is P - No P is S - All P is S", but it is sound anyway, because it applies when S and P are both empty: the "All" quantifier does not commit to existence, but "Some" does.)

The test for a function that decomposes it could be as follows:

    let ``complex decomposition``() =
         let SomeSIsNoP = {S=Starred; SP=BlackFilled; P=Starred}
         let actual = basicCategoricalDecomposition SomeSIsNoP
         let expected = [{S=Starred;SP=Empty;P=Empty};{S=Empty;SP=BlackFilled;P=Empty};{S=Empty;SP=Empty;P=Starred}]
         Assert.AreEqual(expected, actual)

So, any diagram more complex than those six basic forms can still be decomposed into basic forms; these can then be matched, returning the basic propositions, and all the propositions can be joined together.

The basicCategoricalDecomposition function is:


let basicCategoricalDecomposition diagram =

        match diagram with

            | {S=s_pattern; SP=Empty;P=Empty}  when s_pattern <> Empty -> [{S=s_pattern; SP=Empty;P=Empty}] // A or O

            | {S=Empty; SP=sp_pattern;P=Empty} when sp_pattern <> Empty -> [{S=Empty; SP=sp_pattern;P=Empty}]    // E or I  

            | {S=Empty; SP=Empty;P=p_pattern} when p_pattern <> Empty -> [{S=Empty; SP=Empty;P=p_pattern}]

            | {S=s_pattern; SP=sp_pattern;P=Empty} -> [{S=s_pattern; SP=Empty; P=Empty};{S=Empty;SP=sp_pattern;P=Empty}]

            | {S=Empty; SP=sp_pattern;P=p_pattern} -> [{S=Empty; SP=sp_pattern; P=Empty};{S=Empty;SP=Empty;P=p_pattern}]

            | {S=s_pattern; SP=Empty;P=p_pattern} -> [{S=s_pattern; SP=Empty; P=Empty};{S=Empty;SP=Empty;P=p_pattern}]

            | {S=s_pattern; SP=sp_pattern;P=p_pattern} -> [{S=s_pattern;SP=Empty;P=Empty};{S=Empty;SP=sp_pattern;P=Empty};{S=Empty;SP=Empty;P=p_pattern}]


So far, if VennToPropositions matches a basic diagram, it works as a lookup table.
If I pass in a more complex diagram, all the specific matches fail, so I add a default matching rule that calls basicCategoricalDecomposition and then, via a recursive call, gets the union of all the propositions.

Here is the complete VennToPropositions function:


let rec VennToPropositions diagram =

    match diagram with 

        | {S=Empty; SP=Empty;P=Empty} -> []

        | {S=BlackFilled; SP=Empty;P=Empty} ->  [{quantifier1=All;category1=S;appartenence=Is;category2=P}]

        | {S=Empty; SP=Empty;P=BlackFilled} ->  [{quantifier1=All;category1=P;appartenence=Is;category2=S}]

        | {S=Empty; SP=BlackFilled;P=Empty} ->  [{quantifier1=No;category1=S;appartenence=Is;category2=P};{quantifier1=No;category1=P;appartenence=Is;category2=S}]

        | {S=Empty; SP=Starred;P=Empty} -> [{quantifier1=Somes;category1=S;appartenence=Is;category2=P};{quantifier1=Somes;category1=P;appartenence=Is;category2=S}]

        | {S=Starred; SP=Empty;P=Empty} -> [{quantifier1=Somes;category1=S;appartenence=IsNot;category2=P}]

        | {S=Empty; SP=Empty;P=Starred} -> [{quantifier1=Somes;category1=P;appartenence=IsNot;category2=S}]

        | _ -> List.fold (fun acc item -> acc @ (VennToPropositions item) ) [] (basicCategoricalDecomposition diagram)


This last line is the one that decomposes the diagram into basic forms and then recursively calls VennToPropositions on each of them.
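To check the "extreme case" mentioned earlier end to end, here is a condensed, self-contained version of the whole pipeline. The definitions are the ones from the listings above, reformatted; the only change is that the final fold is expressed with the equivalent List.collect.

```fsharp
type SectionStatus = Empty | BlackFilled | Starred
type twoSetsVennDiagram = {S: SectionStatus; SP: SectionStatus; P: SectionStatus}
type Quantifier = All | Somes | No
type Appartenence = Is | IsNot
type CategoryRef = S | M | P
type twoTermsProposition =
    {quantifier1: Quantifier; category1: CategoryRef; appartenence: Appartenence; category2: CategoryRef}

// split a complex diagram into basic one-region diagrams
let basicCategoricalDecomposition diagram =
    match diagram with
    | {S=s; SP=Empty; P=Empty} when s <> Empty -> [diagram]
    | {S=Empty; SP=sp; P=Empty} when sp <> Empty -> [diagram]
    | {S=Empty; SP=Empty; P=p} when p <> Empty -> [diagram]
    | {S=s; SP=sp; P=Empty} -> [{S=s; SP=Empty; P=Empty}; {S=Empty; SP=sp; P=Empty}]
    | {S=Empty; SP=sp; P=p} -> [{S=Empty; SP=sp; P=Empty}; {S=Empty; SP=Empty; P=p}]
    | {S=s; SP=Empty; P=p} -> [{S=s; SP=Empty; P=Empty}; {S=Empty; SP=Empty; P=p}]
    | {S=s; SP=sp; P=p} -> [{S=s; SP=Empty; P=Empty}; {S=Empty; SP=sp; P=Empty}; {S=Empty; SP=Empty; P=p}]

// lookup table for basic forms, with a recursive default rule for the rest
let rec VennToPropositions diagram =
    match diagram with
    | {S=Empty; SP=Empty; P=Empty} -> []
    | {S=BlackFilled; SP=Empty; P=Empty} -> [{quantifier1=All; category1=S; appartenence=Is; category2=P}]
    | {S=Empty; SP=Empty; P=BlackFilled} -> [{quantifier1=All; category1=P; appartenence=Is; category2=S}]
    | {S=Empty; SP=BlackFilled; P=Empty} ->
        [{quantifier1=No; category1=S; appartenence=Is; category2=P};
         {quantifier1=No; category1=P; appartenence=Is; category2=S}]
    | {S=Empty; SP=Starred; P=Empty} ->
        [{quantifier1=Somes; category1=S; appartenence=Is; category2=P};
         {quantifier1=Somes; category1=P; appartenence=Is; category2=S}]
    | {S=Starred; SP=Empty; P=Empty} -> [{quantifier1=Somes; category1=S; appartenence=IsNot; category2=P}]
    | {S=Empty; SP=Empty; P=Starred} -> [{quantifier1=Somes; category1=P; appartenence=IsNot; category2=S}]
    | _ -> basicCategoricalDecomposition diagram |> List.collect VennToPropositions

// the "extreme" diagram: every region black-filled
// decomposes into the A, E and reverse-A basic forms, giving four propositions:
// All S is P; No S is P; No P is S; All P is S
VennToPropositions {S=BlackFilled; SP=BlackFilled; P=BlackFilled}
```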

So far, any valid two-set Venn diagram corresponds to a set of one or more basic categorical propositions consistent with each other.
Using the F# REPL it is possible to test the code immediately, or to write new tests.
If needed, the sources are available on GitHub.

Page 184 of "Understanding Arguments" provides problems that can be solved using this VennToPropositions function (even though, of course, they are not meant to be solved with an automated tool).

A possible extension could be evaluating categorical syllogisms, i.e. particular inferences involving two categorical propositions and three sets, but that can be the subject of another post.