Planet Smalltalk

June 14, 2019

Cincom Smalltalk - Smalltalks 2019 – Call for Sponsors

The Smalltalks 2019 Conference will be held at Universidad Nacional del Comahue, in Neuquén, Argentina. The main event will take place on November 13-15 with the workshops, tutorials and introductory […]

The post Smalltalks 2019 – Call for Sponsors appeared first on Cincom Smalltalk.

Suslov Nikolay - Krestianstvo Luminary for Open Croquet architecture and Virtual World Framework in peer-to-peer Web

Everyone who is familiar with the Croquet architecture is anticipating (waiting breathlessly for) the updates to the Open Croquet architecture from Croquet V by David A. Smith and Croquet Studios!

However, while working on the LiveCoding.space project by Krestianstvo.org, which is heavily based on the Virtual World Framework (containing elements of the Open Croquet architecture), I have started revising the current Reflector server.

Let me introduce the ideas and an early prototype of the Krestianstvo Luminary for the Open Croquet architecture and the Virtual World Framework.
Krestianstvo Luminary could potentially replace the Reflector server in favour of the offline-first Gun DB pure distributed storage system. Instead of ‘reflecting’ messages with the centralised Croquet ‘time now’, it ‘shines’ time on every connected node using Gun’s Hypothetical Amnesia Machine, running in the peer-to-peer Web. It also secures all external message streams by using peer-to-peer identities and the SEA cryptographic library for Gun DB. Moreover, Luminary could run on the AXE blockchain.

For those who are not familiar with the Open Croquet architecture, I just want to note the key principles behind it in simple words.


Croquet Architecture


Croquet introduced the notion of virtual time for decentralised computations. Objects are thought of as streams of messages, which leads to deterministic computations on every connected node in the decentralised network. All computations are done on every node by itself while interpreting an internal queue of messages, which is not replicated to the network. These queues are synchronised by external heartbeat messages coming from the Reflector, a tiny server. Any messages a node generates itself that should be distributed to other nodes are marked as external. They are explicitly routed to the Reflector, where they are stamped with the Reflector’s ‘time now’ and returned to the node itself and all other nodes on the network.
The Reflector is not only used for sending heartbeat messages and stamping external messages; it also holds the list of connected clients and the list of running virtual world instances, and bootstraps new client connections.

Reflector 


So, in the Croquet architecture for decentralised networks, the Reflector, while being very tiny or even a micro service, remains a server.
It uses WebSockets for coordinating clients and world instances, providing the ‘time now’ to the clients, and reflecting external messages.

Let’s look at how it works in the Virtual World Framework (VWF). I will use the available open source code from VWF, which I am using in the LiveCoding.space project by Krestianstvo.org.

Here is the function that returns the Reflector’s ‘time now’. The time is taken from the machine running the Reflector server:
(server code from lib/reflector.js)

function GetNow( ) {
    return new Date( ).getTime( ) / 1000.0;
}

This is then used to make a timestamp for a virtual world instance:

return ( GetNow( ) - this.start_time ) * this.rate

The Reflector sends these timestamps over WebSockets. On the client side, VWF has a method for dispatching them:
(client code from public/vwf.js)

socket.on( "message", function( message ) {
  var fields = message;
  // ...
  fields.time = Number( fields.time );
  fields.origin = "reflector";
  queue.insert( fields, !fields.action );
  // ...

Look at the send and respond methods, where clients use the WebSocket to send external messages back to the Reflector:
   var message = JSON.stringify( fields );
   socket.send( message );

Luminary


Now, let’s look at how Krestianstvo Luminary could identically replace the Reflector server.


First of all, clients never use WebSockets directly from the application itself for sending or receiving messages. Instead, Gun DB takes care of that functionality internally. All operations that previously relied on the WebSocket connection are replaced by subscribing to updates and changes on Gun DB nodes and properties.
So instances and clients are just Gun DB nodes, available to all connected peers. In this scheme, the required Reflector application logic moves from the server to the clients, as every client can, at any moment in time, get up-to-date information about the instance it is connected to, the clients on that instance, etc., just by requesting a node from Gun DB.

Now, about time.

Instead of using the machine’s new Date().getTime(), Krestianstvo Luminary uses the state from Gun’s Hypothetical Amnesia Machine (HAM), which combines timestamps, vector clocks, and a conflict resolution algorithm. Every written property on a Gun node is stamped with a HAM state. This state is identical for all peers, which means we can read it on any client.
Taking into consideration that Gun DB guarantees that every change on every node or property will be delivered in the right order to all peers, we can make a heartbeat node and subscribe peers to its updates.

Here is the code for creating a heartbeat for VWF:

Gun.chain.heartbeat = function (time, rate) {
    // our gun instance
    var gun = this;
    gun.put({
        'start_time': 'start_time',
        'rate': 1
    }).once(function (res) {
        // start the metronome timer
        setInterval(function () {
            let message = {
                parameters: [],
                time: 'tick'
            };
            gun.get('tick').put(JSON.stringify(message));
        }, 50);
    });

    // return gun so we can chain other methods off of it
    return gun;
}
The client that starts first or creates a new virtual world instance creates the heartbeat node for that instance and runs a metronome (that part could run on a Gun DB instance somewhere on a hosting server, for anytime availability):

let instance = _LCSDB.get(vwf.namespace_);
instance.get('heartbeat').put({ tick: "{}" }).heartbeat(0.0, 1);

So, every 50 ms this client writes the message content to the ‘tick’ property, changing it, so Gun’s HAM moves the state for this property, stamping it with a new unique value, from which the Croquet time will be calculated later.
The start time is the HAM state value of the ‘start_time’ property of the heartbeat node. Please notice that the actual Croquet timestamp is not calculated here, as it was in the Reflector server. The timestamp used for the Croquet internal queue of messages is calculated when the VWF client reads ‘tick’ in its main application.

Here is a simplified core version of dispatching ‘tick’ in the VWF client main app, just to get the idea (full code in public/vwf.js, links below):

let instance = _LCSDB.get(vwf.namespace_);
instance.get('heartbeat').on(function (res) {
    if (res.tick) {
        let msg = self.stamp(res, start_time, rate);
        queue.insert(msg, !msg.action);
    }
});
this.stamp = function (source, start_time, rate) {
    let message = JSON.parse(source.tick);
    message.state = Gun.state.is(source, 'tick');
    message.start_time = start_time; // Gun.state.is(source, 'start_time');
    message.rate = rate; // source.rate;
    var time = ((message.state - message.start_time) * message.rate) / 1000;
    if (message.action == 'setState') {
        time = ((_app.reflector.setStateTime - message.start_time) * message.rate) / 1000;
    }
    message.time = Number(time);
    message.origin = "reflector";
    return message;
}
The main point here is the calculation of Croquet time using Gun’s HAM state:

Gun.state.is ( node, property )

for a message:

message.state = Gun.state.is(source, 'tick'); // time of updating tick
message.start_time = Gun.state.is(source, 'start_time'); // start time of the instance heartbeat
message.rate = source.rate;
var time = ((message.state - message.start_time) * message.rate) / 1000;


So, all peers will calculate exactly the same Croquet time upon getting an update from Gun DB, regardless of when they get this update (network delays, etc.).

As you can imagine, sending an external message is as simple as a peer writing the new message content to the instance heartbeat. All connected peers, and the sending peer itself, will get that message stamped with Croquet time, since they are subscribed to changes on the heartbeat node (see the instance.get('heartbeat').on() definition above).

instance.get('heartbeat').get('tick').put(JSON.stringify(newMsg));

Actually that’s it!

Conclusions


  • no Reflector server is needed (any running Gun DB instance on the network fits; it can know nothing about the VWF app and its clients)
  • clients, world instances and connection logic are held by the distributed DB and the clients
  • messages are stamped by the clients themselves using Gun’s HAM
  • one dedicated peer produces empty metronome messages to move time forward (it could be anywhere)


All the advantages that Gun DB provides are applicable inside the Croquet architecture. One scenario could be the use of Gun’s Timegraph, which would allow storing and retrieving the history of messages for recording and later replay. Using the SEA (Security, Encryption, & Authorization) library would allow creating highly secure instance heartbeats using peer-to-peer identities, deployable anywhere and available anytime on the AXE blockchain.

Issues


To make a fully functional prototype, there are still issues in porting the Reflector application logic to Gun DB’s functional-reactive architecture. These concern the procedure of connecting clients to a running instance, as it involves getting/setting the instance state and pending messages, and then replaying them on newly connected peers. But all of that is not critical, as it does not affect the main idea behind Krestianstvo Luminary.
There are also performance issues, as Gun DB uses the RAD storage adapter. Configuring several RAD options could help, particularly opt.chunk and opt.until (due to RAD or JSON parse time for each chunk).

Source code


The raw code is available in the LiveCoding.space GitHub repository under the branch ‘luminary’.

The branch ‘luminary-partial’ contains a working prototype of a partial Luminary, where one master client is chosen for the reflector logic. It uses Gun.state() for stamping messages, as was done in the original reflector app, and then distributes them as updates to the other peers through Gun DB.


Thanks for reading. I will be glad if you share your comments and visions on this.

Nikolai Suslov

Torsten Bergmann - Observer Pattern in Pharo

Torsten Bergmann - Event Music Manager in Pharo and Seaside

Benoît Verhaeghe experimented with Pharo, Seaside and MP3 playing on Linux using LibMPEG3. The result can be seen here: https://twitter.com/badetitou/status/1139261925722902536

Code is on GitHub: https://github.com/badetitou/EMM

June 13, 2019

Pierce Ng - TIG: Telegraf InfluxDB Grafana Monitoring

I've set up the open source TIG stack to monitor the services running on these servers. TIG = Telegraf + InfluxDB + Grafana.

  • Telegraf is a server agent for collecting and reporting metrics. It comes with a large number of input, processing and output plugins. Telegraf has built-in support for Docker.

  • InfluxDB is a time series database.

  • Grafana is a feature-rich metrics dashboard supporting a variety of backends including InfluxDB.

Each of the above runs in a Docker container. Architecturally, Telegraf stores the metrics data that it collects into InfluxDB. Grafana generates visualizations from the data that it reads from InfluxDB.
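
For illustration, a minimal telegraf.conf wiring these pieces together might look like the sketch below; the InfluxDB hostname and the database name are assumptions rather than the actual values used here:

[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"   # collect metrics from the host's Docker engine

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]            # assumed InfluxDB container hostname
  database = "telegraf"                      # assumed database name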

Here are the CPU and memory visualizations for this blog, running on Pharo 7 within a Docker container. The data is collected by Telegraf by querying the host's Docker engine.

(Screenshots: Grafana dashboards showing Pharo CPU and memory usage.)

The following comes to mind:

  • While Pharo is running on the server, historically I've kept its GUI running via RFBServer. I haven't had to VNC in for a long time now though. Running Pharo in true headless mode may reduce Pharo's CPU usage.

  • In terms of memory, ~10% usage by a single application is a lot on a small server. Currently this blog stores everything in memory once loaded/rendered. But with the blog's low volume, there really isn't a need to cache; all items can be read from disk and rendered on demand.

Only one way to find out - modify software, collect data, review.

June 12, 2019

Cincom Smalltalk - Cincom Smalltalk’s Carl Gundel to Speak at Seattle Meetup

Paige W. and Steve K. from Seattle Software Crafters will be hosting a meetup at CDK Global in Seattle, Washington on July 25th from 6:00 p.m. to 8:30 p.m. The […]

The post Cincom Smalltalk’s Carl Gundel to Speak at Seattle Meetup appeared first on Cincom Smalltalk.

June 11, 2019

Mariano Martinez Peck - VA Smalltalk: Remote controlling Raspberry Pis from Across the World!

You probably know that my friends Gera and Javier and I are working on automating a bell tower (called a carillon) using Raspberry Pi, Python and VASmalltalk.

Last week, Gera was on his way to the Buenos Aires airport to take a plane to Las Vegas. Via chat, he told me he had forgotten his Pi: he wanted to work on our project during his free time on the trip. Any normal person would have just ordered another one on Amazon or waited until they got home.

But it was Gera. So he asked me: “Could you give me SSH access to your Pi?” Sure, I have 3 or 4 Raspberries around, and with my Eero router it’s quite easy to do port forwarding. I also had a No-IP hostname and the client app refreshing the IP on my Mac. So… it was pretty easy for me to give him all those tools: the external IP, SSH access and my running Pi.

In this post, I am writing down everything he figured out (and shared with me) to remotely access Raspberry Pi GPIOs. It’s very important to note that “remote” doesn’t have to mean “Las Vegas”. It’s also very useful to access your Raspberry Pi GPIOs from your Linux development machine.

Finally, as a prerequisite for this post, I recommend first reading a previous one that explains the basics of pigpio and VASmalltalk.

Setting up the Raspberry Pi for remote GPIO

The first thing is to tell Raspbian that you allow remote GPIO. You can do this from the command line with `sudo raspi-config` or with the Raspberry Pi Configuration user interface (the Remote GPIO option is near the bottom of the interfaces list).

The last thing is that you must explicitly start the ‘pigpiod’ daemon, for example with ‘sudo pigpiod’ (it listens on port 8888 by default). If you want to avoid starting it by hand every time, you can enable the daemon at startup with something like ‘sudo systemctl enable pigpiod’. For this post, we started it by hand as shown above.

Setting up your Linux client machine

The client machine must also have the ‘pigpio’ library installed (like the Pi does). Depending on which Linux distro you are on, it could be as simple as a ‘sudo apt-get install pigpio’ kind of command. If the package is not there, then you can compile it from source as explained here. You can run ‘sudo pigpiod -v’ to verify the library was correctly installed.

Now, in the abt.ini file used by VASmalltalk, under the section ‘[PlatformLibrary Name Mappings]’, you must add these lines:

RaspberryGpio=libpigpio.so
RaspberryGpioDaemon=libpigpiod_if2.so
RaspberryGpioUltrasonicDaemon=libpigpioultrasonic.so

Be sure those are correct and “findable” by the OS.

Connecting the client Linux to the remote Pi

To connect from the Linux machine to the Pi, you basically need to know its IP and have a connection to it. If both are on the same internal network (and there is no firewall or anything in the middle blocking that port), then you can probably do the following in Smalltalk:

RaspberryGpioDaemonInterface defaultIP: 'XXX.XXX.XXX.XXX' andPort: '8888'.

Where ‘XXX.XXX.XXX.XXX’ is the internal IP of your Pi.

If you want to connect from outside your local network and you want to avoid opening the 8888 port, you can do an SSH tunnel:

ssh -v -t -YC -p NNNN pi@'YYY.YYY.YYY.YYYY' -L 8888:127.0.0.1:8888

Or:

ssh -v -t -YC -p NNNN pi@'your.hostname.dns.whatever' -L 8888:127.0.0.1:8888

To confirm the tunnel is working, you can run ‘nc -v 0 8888’ on your client Linux. You should see a success message and then some debugging info in the SSH console where you opened the tunnel.

Once the tunnel has been established, you can connect to the Pi from Smalltalk like this:

RaspberryGpioDaemonInterface defaultIP: '127.0.0.1' andPort: '8888'.

See it live

Below is a video showing how all this worked for real between Gera and me.

Full Smalltalk example

Below is a whole example of connecting a client to a remote Raspberry Pi through an SSH tunnel and accessing the GPIO pins of an MCP23017 I2C I/O expander (we will talk about this in a future post).
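
A hedged sketch of what that could look like: only the defaultIP:andPort: call appears earlier in this post; the I2C selectors are hypothetical placeholders rather than the wrapper’s confirmed API, while the MCP23017 address and registers come from its datasheet.

| daemon i2c |
RaspberryGpioDaemonInterface defaultIP: '127.0.0.1' andPort: '8888'.  "through the SSH tunnel"
daemon := RaspberryGpioDaemonInterface default.  "hypothetical accessor"
i2c := daemon openI2CBus: 1 address: 16r20.  "hypothetical selector; 16r20 is the MCP23017 default address"
i2c write8Bits: 16r00 atRegister: 16r00.  "hypothetical selector; IODIRA register: all port-A pins as outputs"
i2c write8Bits: 16rFF atRegister: 16r14.  "hypothetical selector; OLATA register: drive all port-A pins high"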

Conclusion

Being able to directly manipulate the GPIOs of your Raspberry Pi from a development machine, or from anywhere in the world, is a tremendously useful feature. Thanks, Gera, for showing me how to do it!

David A. Smith - Croquet Multi-user Demonstration

June 10, 2019

Pharo Weekly - [ann ] Dr. Geo release 19.06

I am pleased to announce the Dr. Geo release 19.06, the GNU interactive
geometry software. It follows release 19.03 from March 2019.

– New features
– Bug fixes
– Updated French user guide <http://drgeo.eu/help>
– New book “Programmer des math avec Dr. Geo” <http://drgeo.eu/help>. WIP – Feedback appreciated!

See details in the change log file in the software or read the bug fix
list <https://launchpad.net/drgeo/trunk/19.06>.

Download <http://drgeo.eu/download>.

Hilaire


Dr. Geo
http://drgeo.eu

June 07, 2019

Pharo Weekly - [ann] IPFS for Pharo

Hi everyone,

Over the last weeks I have started to explore IPFS more seriously.
IPFS, the Inter-Planetary File System, is supposed to be the
next-generation Web: a decentralized content-addressed database.

Since there is nothing better than Pharo for exploring databases,
I have started to write an IPFS interface for Pharo:

https://github.com/khinsen/ipfs-pharo

It connects to a local IPFS server, so you have to have one
running. It’s surprisingly straightforward to install and configure,
unless you have to fight with firewalls that block IPFS traffic.

Feedback of any kind is welcome!

Cheers,
Konrad.

Mariano Martinez Peck - Beginners guide to GPIO in VASmalltalk

As I commented in an earlier post, one of the great features VASmalltalk has in the context of IoT is a wrapper for the C library pigpio, which allows us to manage GPIOs as well as their associated protocols like 1-Wire, I2C, etc.

In this post we will see the basic setup for getting this to work.

Setting up pigpio library

The first step is to install the pigpio C library. The easiest way is to install it with a package manager if the library is available:

sudo apt-get update
sudo apt-get install pigpio

If you want a different version than the one installed by the package manager, or the package is not available, then you can compile it yourself:

rm master.zip
sudo rm -rf pigpio-master
wget https://github.com/joan2937/pigpio/archive/master.zip
unzip master.zip
cd pigpio-master
make
sudo make install

To verify the library is installed correctly, you can execute “pigpiod -v”, which should print the installed version.

Setting up VASmalltalk

In this post I showed how to get a VASmalltalk ECAP release with ARM support. The instructions are as easy as uncompressing the downloaded zip into a desired folder.

The next step is to edit “/raspberryPi/abt32.ini” to include the following fields:

RaspberryGpio=libpigpio.so
RaspberryGpioDaemon=libpigpiod_if2.so
RaspberryGpioUltrasonicDaemon=libpigpioultrasonic.so

under the section “[PlatformLibrary Name Mappings]”.

Then, fire up the VASmalltalk image by doing:

cd raspberryPi/
./abt32.sh

Once inside VASmalltalk, go to “Tools” -> “Load/Unload Features…” and load the feature “VA: VAStGoodies.com Tools”.

Then go to “Tools” -> “Browse Configurations Maps”, right-click on the left pane (the list of maps) and select “Import from VAStGoodies.com”. Now load “RaspberryHardwareInterfaceCore” and “RaspberryHardwareInterfaceTest”.

You are done! You have installed the pigpio library and the VASmalltalk wrapper. Let’s use it!

Some GPIO utilities to help you get started

Before starting with VASmalltalk, let me show you some Linux utilities that are very useful when you are working with GPIO.

One of the commands is “pinout”, which comes with Raspbian. It shows you everything you need to know about your Pi!! Hardware information as well as a layout of the pins.

And yes, do believe the command line output and visit https://pinout.xyz. It’s tremendously useful. A must-have.

The other tool is ‘gpio’. It allows you to see the status of every pin and even pull pins up / down right from there. The example below shows how to read all pins and then pull up BCM pin 17 (physical pin 11).
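
Roughly, the equivalent commands are the following (the -g flag tells the gpio tool to use BCM numbering):

gpio readall          # table of every pin: physical, BCM and wPi numbers, mode and current value
gpio -g mode 17 out   # configure BCM pin 17 (physical pin 11) as an output
gpio -g write 17 1    # pull it up; 'gpio -g write 17 0' pulls it down again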

I don’t want to go into the details in this post, but as you can see, each pin can have 3 different numbers: physical (the number on the board), BCM and wPi (WiringPi). They also have a name. So whenever you are connecting something, you must be sure which “mode” the number refers to. The number alone is not enough.

Managing GPIOs from Smalltalk!

In this post, we will see the most basic use of a GPIO, which basically means pulling it up or down. When up, it outputs 3.3V (in the case of a Raspberry Pi); when down, 0V. This is enough for you to play with LEDs, fans, and anything that doesn’t require “data”, just voltage.

The Smalltalk code for doing that is:
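
Here is a minimal sketch; the selectors are hypothetical placeholders for the RaspberryHardwareInterface API rather than its confirmed method names:

| gpio |
gpio := RaspberryGpioDaemonInterface default.  "hypothetical accessor"
gpio setAsOutput: 17.  "hypothetical selector; BCM pin 17 = physical pin 11"
gpio pinUp: 17.  "hypothetical selector; the pin now outputs ~3.3V -- verify with a volt meter or 'gpio readall'"
(Delay forSeconds: 5) wait.
gpio pinDown: 17.  "hypothetical selector; back to 0V"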

In the comment of the above snippet you can see how to validate that it actually worked: use a volt meter or ‘gpio readall’ to confirm the change. For the volt meter, set it to the 10 / 20 volt (DCV) range. Then, with the negative cable (usually black), touch any ground pin (for example, physical pin 6) and, with the positive cable (usually red), touch GPIO pin 17 (physical pin 11). When the GPIO is on, you should see the meter register about 3.3 volts, and 0 when off. Welcome to the hardware debugger!

Acknowledgments

The pigpio wrapper (RaspberryHardwareInterface) is a community-driven project. I think Tim Rowledge started it in Squeak Smalltalk, then Louis LaBrunda began a port to VASmalltalk, and finally Instantiations helped to get that port finished and running.

Conclusion

In this post you saw how to install the pigpio C library and the wrapper in VASmalltalk, and we walked through one basic example. In future posts we will see more advanced GPIO uses and protocols.

Pharo Weekly - Blog post: About singleton

June 05, 2019

Mariano Martinez Peck - Getting started with VASmalltalk, Raspberry Pi and other devices

In the previous post, I described why I personally believe that Smalltalk is a good fit for IoT. When we talk about IoT, there are millions of topics we can touch: single board computers, sensors, security, protocols, edge computing, GPIOs, AI, and so on. But one device in particular that changed the world is the Raspberry Pi.

You can read on many websites why the Raspberry Pi was a game changer. But what matters now is that it’s a single board computer that fits in your hand, that can have 4 CPU cores (ARM), 1GB RAM, HDMI, 4 USBs, video, audio, Wi-Fi, Bluetooth and 40 GPIOs, all for 35USD. Such a device has enough power to perfectly run Linux. You can even run Docker containers on it!!

Obviously, by this time, Raspberry is not alone; there are plenty of devices out there as well: Pine64, BananaPi, ODROID-N2, just to name a few. There are new devices every day. They usually focus on different things: being cheap, AI, desktop replacement, being specially designed for clustering, etc.

However, most of them provide similar features:

  • ARM / ARM 64 processors (only a few others may have Intel or RISC-V).
  • Run a few given Linux distros either 32 or 64 bits (although some devices would also work with Windows 10 IoT).
  • Provide a number of GPIOs (most of the time, 40 pins) for general input/output, usually between 3.3V and 5V. Not all boards provide GPIOs.
  • Implement some protocols to use over the GPIOs (UART, I2C, 1-Wire, SPI, etc).

What does Smalltalk (or any other language) need to run and take advantage of these devices?

The first obvious thing is that the Virtual Machine (VM) of the language must be compiled for ARM / ARM 64 (not x86 / x64). And it’s not just the VM but every third-party library the VM depends on. Also, the VM may have some assumptions about the CPU architecture, like some Just In Time compilers that only work on x86.

The second obvious thing is that such a VM should work on Linux, and most likely on any of the most commonly available distros: Raspbian, Armbian, Ubuntu, Debian, etc. This must be an ARM-compiled version of the distro.

Finally, it is almost mandatory that the language (Smalltalk in this case) has some functionality to use the GPIOs. This is super important for sensors and all kinds of devices that can be connected to the single board computer. A common approach for high-level languages like Smalltalk is to wrap, via FFI (Foreign Function Interface), a third-party C library that already takes care of that and all the related protocols (1-Wire, I2C, SPI, UART, etc.). There are many libraries out there: pigpio, wiringpi, etc.

What’s the particular architecture of VASmalltalk?

The new VASmalltalk VM (starting with VA 9.1) was designed from scratch to be portable, performant and maintainable.

With that in mind, the VM was built using LLVM and libffi. LLVM supports many different target architectures, so VASmalltalk could eventually target RISC-V, WebAssembly or any other of its supported CPU architectures with relatively small effort.

In fact, it took us only a small number of hours to have the VM compiled for ARM, and a few more to compile it for ARM 64 (#aarch64).

For the Linux dependency itself, the VASmalltalk VM only depends on glibc (and, if headful, on Motif), which is available on almost every Linux. So it’s quite likely it will work out of the box on any Linux. It also supports both ARM and ARM 64, so it should work with either bitness of Linux.

Finally, for the GPIO, VASmalltalk has an FFI binding for pigpio, which allows you to read sensor data, pull pins up and down, and use most of the protocols already mentioned.

Show it to me!!

The procedure explained below for getting started with VASmalltalk, Linux and ARM is quite similar for all devices and Linux flavors. For the purpose of this post I need to pick one, so I’ll take the most common: a Raspberry Pi (which could be any of the models) + Raspbian. The latter is a custom Debian OS specially made for the Raspberry Pi, and the most commonly used and accepted.

At the time of this writing, the ARM version of VASmalltalk is only available through ECAP (Early Customer Access Program) releases. An ECAP release comes as a compressed file that you just uncompress wherever you want; everything is self-contained. On Windows there is no further step to do. On Linux, you must be sure that the dependencies are installed. Note that this only happens with ECAP releases, as VASmalltalk does provide regular .deb and .rpm installers for stable releases, and those install dependencies automatically.

Below is a script I normally use on a fresh Raspbian. Not all of the script is mandatory; just some of it:

#!/bin/sh

# Remove some stuff I normally don't use to free resources
sudo apt-get remove --purge --assume-yes \
  scratch* \
  libreoffice* \
  wolfram-engine \
  sonic-pi \
  minecraft-pi

# Lets update before installing
sudo apt-get update

# Basic tooling I like to have an all my Linuxes
sudo apt-get install --assume-yes \
  vim \
  tmux \
  htop \
  iotop \
  autocutsel \
  avahi-daemon \
  i2c-tools \
  netatalk \
  libnss-mdns \
  xrdp \
  curl

# Fix VNC issue https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=230779&p=1413648#p1413648
sudo apt-get install --assume-yes haveged
sudo update-rc.d haveged defaults


# Install pigpio (needed to use GPIO from VASmalltalk)
sudo apt-get install --assume-yes \
  pigpio \
  python-pigpio \
  python3-pigpio
cd $HOME
wget https://raw.githubusercontent.com/joan2937/pigpio/master/util/pigpiod.service
sudo cp pigpiod.service /etc/systemd/system/
sudo systemctl enable pigpiod
rm pigpiod.service

# Install RPI Monitor --- useful tool for monitoring status of the Pi
sudo apt-get install apt-transport-https ca-certificates dirmngr
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 2C0D3C0F
sudo wget http://goo.gl/vewCLL -O /etc/apt/sources.list.d/rpimonitor.list
sudo apt-get update
sudo apt-get install rpimonitor

# Install VA Dependencies for running headless (esnx)
sudo apt-get install --assume-yes --no-install-recommends \
  libc6 \
  locales

# Install VA Dependencies for running headful and the VA Environments tool
sudo apt-get install --assume-yes --no-install-recommends \
  libc6 \
  locales \
  xterm \
  libxm4 \
  xfonts-base \
  xfonts-75dpi \
  xfonts-100dpi

# Only necessary if we are using OpenSSL from Smalltalk
sudo apt-get install --assume-yes --no-install-recommends \
  libssl-dev 

# Generate locales
sudo su
echo en_US.ISO-8859-1 ISO-8859-1 >> /etc/locale.gen
echo en_US.ISO-8859-15 ISO-8859-15 >> /etc/locale.gen
locale-gen
exit

# Cleanup
sudo apt-get clean
sudo apt-get autoremove

That bash script should be pretty clear about what I am doing and why. And as you can see, it is quite easy to adapt to other distros too. You can also see how to make a Docker image for VASmalltalk and ARM here.

Once you have uncompressed the ECAP, all you need to do is go to that directory, under ‘/raspberryPi/’, and run ‘./abt32.sh’. That will bring up the full VASmalltalk Unix IDE.


Tips, tricks and recommendations to get started with Raspberry Pi

Below are just some recommendations if this is the first time you give Raspberry Pi a try:

  • Buy one of those kits (like CanaKit or similar) that already comes with a power supply and an already-formatted microSD card with NOOBS installed. NOOBS is software that, at boot time on the Pi, lets you pick an operating system and install it on that very same SD card.
  • If you got the SD card from somewhere else, I recommend using the “SD Card Formatter” software (or similar) to be sure the SD is properly formatted for NOOBS. Then download and put NOOBS on that SD card, boot the Pi, and finally install Raspbian.
  • I recommend starting with Raspbian Linux (the default choice with NOOBS).
  • If you want to buy the SD card separately, I recommend at least a UHS-1 Class 10 card.
  • Be sure to have a microSD-to-SD adaptor or a USB card reader able to read microSD.

Finally, if you want to know my top 10 gadgets for IoT, see the tweet below. Although, I must say I already have new gadgets that should be added there!!

Conclusion

This post should give you an idea of how to get started with VASmalltalk on an ARM-Linux-powered single board computer. I have personally tried a few devices (Pi Zero W, Pi 3B+, Rock64) and many OSs (Raspbian, Raspbian Lite, Ubuntu Server 18.04, Armbian, etc.). The procedure is quite similar for all of them.

In the upcoming posts, I will show how to access the GPIO from VASmalltalk, how to assemble and wrap specific sensors, how to transparently persist data on a remote GemStone, unique debugging capabilities, and much more.

Stay tuned!

June 02, 2019

Smalltalk Jobs - Smalltalk Jobs – 6/2/19

  • Miami, FL: Smalltalk Developer through Kforce
    • Required Skills:
      • Smalltalk
    • Wanted Skills:
      • Relevant education and/or training will be considered a plus
      • Some Java and C++ makes it quicker to learn Smalltalk
  • Guadalajara, Mexico: Application Developer at IBM
    • Required Skills:
      • Any experience in an OO language like C++/Eiffel/Java/etc.
      • Candidate should also be willing to learn and work on Smalltalk as his day to day job, and then learn new technologies such as Javascript/Angular/Node JS for future new development.
      • Fluent English.
      • Candidate should be able to work in USA business hours.
    • Wanted Skills:
      • Smalltalk
Good luck with your job hunting,
James T. Savidge

View James T. Savidge's profile on LinkedIn

This blog’s RSS Feed

Torsten Bergmann - PlantUML access from Pharo

May 28, 2019

ESUG news - ESUG19: Call for Student Volunteers

Student volunteers help keep the conference running smoothly; in return, they have free accommodations, while still having most of the time to enjoy the conference.

Pay attention: places are limited, so do not wait till the last minute to apply.

More information: https://esug.github.io/2019-Conference/callForStudents2019.html

May 27, 2019

Stefan Marr - Generating an Artifact From a Benchmarking Setup as Part of CI

Disclaimer: The artifact, for which I put this automation together, was rejected. I take this as a reminder that the technical bits still require good documentation to be useful.

In the programming language community, as well as in other research communities, we strive to follow scientific principles. One of them is that others should be able to verify the results that we report. One way of enabling verification of our results is by making all, or at least most elements of our systems available. Such an artifact can then be used for instance to rerun benchmarks, experiment with the system, or even build on top of it, and solve entirely different research questions.

Unfortunately, it can be time consuming to put such artifacts together. And the way we do artifact evaluation does not necessarily help with it: you submit your research paper, and then at some later point you get to know whether it is accepted or not. Only if it is accepted do we start preparing the artifacts. Because it can be a lot of work and the deadlines are tight, the result may be less than perfect, which is rather unfortunate.

So, how can we reduce the time it takes to create artifacts?

1. More Automation

For a long time now, I have worked on gradually automating the various elements in my benchmarking and paper writing setup. It started early on with ReBench (docs), a tool to define benchmarking experiments. The goal was to enable others and myself to reexecute experiments with the same parameters and build setup. However, in the context of an artifact, this is only one element.

Perhaps more importantly, with an artifact we want to ensure that others do not run into any kind of issues during the setup of the experiments, avoiding problems with unavailable software dependencies, version conflicts, and the usual mess of our software ecosystems.

One way of going about avoiding these issues is to setup the whole experiment in a systems virtual machine. This means, all software dependencies are included and someone using the artifact will need only the software that can execute the virtual machine image.

VirtualBox is one popular open source solution for these kind of systems virtual machines. Unfortunately, setting up a virtual machine for an artifact is time consuming.

Let’s see how we can automate it.

2. Making Artifacts Part of Continuous Integration

2.1 Packer: Creating a VirtualBox with a Script

Initially, I started using Vagrant, which allows us to script the “provisioning” of virtual machine images. This means, we can use it to install the necessary software for our benchmarking setup in a VirtualBox image. Vagrant also supports systems such as Docker and VMWare, but I’ll stick to VirtualBox for now.

Unfortunately, my attempt at using Vagrant was less than successful. While I was able to generate an image with all the software needed for my benchmarks, when testing the image, it would not boot correctly. Might have been me, or some fluke with the Vagrant VirtualBox image repository.

Inspired by a similar post on creating artifacts, I looked into packer.io, which allows us to create a full VirtualBox image from scratch. Thus, we have full control over what ends up in an artifact, and we can script the process in a way that can be run as part of CI. With a fully automated setup, I can create an artifact on our local GitLab CI Runner, either as part of the normal CI process or perhaps weekly, because it takes about 2h to build a VM image.

As a small optimization, I split the creation of the image into two steps. The first step creates a base image with a minimal Lubuntu installation, which can be used as a common base for different artifacts. The second step creates the concrete artifact by executing shell scripts inside the VM, which install dependencies and build all experiments so that the VM image is ready for development or benchmarking.

2.2 Fully Automated Benchmarking as Stepping Stone

Before going into setting up the VM image, we need some basics.

My general setup relies on two elements: a GitLab CI runner, and an automated benchmarking setup.

The fully automated benchmarking setup is useful in its own right. We have used it successfully for many of our research projects. It executes a set of benchmarks for every change pushed to our repository.

Since I am using ReBench for this, running benchmarks on the CI system is nothing more than executing the already configured set of benchmarks:

rebench benchmark.conf ci-benchmarks

For convenience, the results are reported to a Codespeed instance, where one can see the impact of any changes on performance.

Since ReBench also builds the experiments before running them, we are already halfway to a useful artifact.

2.3 Putting the Artifact Together

Since we could take any existing VirtualBox image as a starting point, let’s start with preparing the artifact, before looking at how I create my base image.

In my CI setup, creating the VirtualBox image boils down to:

packer build artifact.json
mv artifact/ ~/artifacts  # to make the artifact accessible

The key here is of course the artifact.json file, which describes where the base image is, and what to do with it to turn it into the artifact.

The following is an abbreviated version of what I am using to create an artifact for SOMns:

"builders" : [ {
  "type": "virtualbox-ovf",
  "format": "ova",
  "source_path": "base-image.ova",
  ...
} ],
"provisioners": [ {
  "execute_command":
    "echo 'artifact' | {{.Vars}} sudo -S -E bash -eux '{{.Path}}'",
  "scripts": [
    "provision.sh"
  ],
  "type": "shell"
}]

In the actual file, there is a bit more going on, but the key idea is that we take an existing VirtualBox image, boot it, and run a number of shell scripts in it.

These shell scripts do the main work. For a typical paper of mine, they would roughly:

  1. configure package repositories, e.g. Node.js and R
  2. install packages, e.g., Python, Java, LaTeX
  3. checkout the latest version of the experiment repo
  4. run ReBench to build everything and execute a benchmark to see that it works. I do this with rebench --setup-only benchmark.conf
  5. copy the evaluation parts of the paper repository into the VM
  6. build the evaluation plots of the paper with KnitR, R, and LaTeX
  7. link the useful directories, README files, and others on the desktop
  8. and for the final touch, set a project specific background file.

A partial script might look something like the following:

wget -O- https://deb.nodesource.com/setup_8.x | bash -

apt-get update
apt-get install -y openjdk-8-jdk openjdk-8-source \
                   python-pip ant nodejs

pip install git+https://github.com/smarr/ReBench

git clone ${GIT_REPO} ${REPO_NAME}

cd ${REPO_NAME}
git checkout ${COMMIT_SHA}
git submodule update --init --recursive
rebench --setup-only ${REBENCH_CONF} SOMns

It configures the Node.js apt repositories and then installs the dependencies. Afterwards, it clones the project, checks it out, and runs the benchmarks. There are a few more things to be done, as can be seen for instance with the SOMns artifact.

This gives us a basic artifact that can be rebuilt whenever needed. It can of course also be adapted to fit new projects easily.

The overall image is usually between 4-6GB in size, and the build process, including the minimal benchmark run takes about 2h. Afterwards, we have a tested artifact.

What remains is writing a good introduction and overview, so that others may use it, verify the results, and may even be able to regenerate the plots in the paper with their own data.

3. Creating a Base Image

As mentioned before, we can use any VirtualBox image as a base image. We might already have one from previous artifacts, and now simply want to increase automation, or we use one of the images offered by the community. We can also build one specifically for our purpose.

For artifacts, size matters. Huge VM images make downloads slow and storage difficult, and they require users to have sufficient free disk space. Therefore, we may want to ensure that the image only contains what we need.

With packer, we can automate the image creation, including the initial installation of the operating system, which gives us the freedom we need. The packer community provides various examples that are a useful foundation for custom images. Inspired by bento and an Idris artifact, I put together scripts for my own base images. These scripts download an Ubuntu server installation disk, create a VirtualBox VM, and then start the installation. An example configuration is artifact-base-1604.json, which creates a Lubuntu 16.04 core VM. The configuration sets various details including memory size, number of cores, hostname, username, password, etc. Perhaps worthwhile to highlight are the following two settings:

"hard_drive_nonrotational": true,
"hard_drive_discard": true,

This instructs VirtualBox to create the hard drive as an SSD, which hopefully ensures that the disk only uses the space actually required, and therefore minimizes the size of our artifacts. Though, I am not entirely sure this is without drawbacks. But so far, it seems that disk zeroing and the other tricks used to reduce the size of VM images are not necessary with these settings.

In addition to the artifact-base-1604.json file, the preseed.cfg instructs the Ubuntu installer to configure the system and install useful packages such as an SSH server, a minimal Lubuntu core system, Firefox, a PDF viewer, and a few other things. After these are successfully installed, basic-setup.sh configures the system to disable automatic updates, configures the SSH server for use in a VM, enables password-less sudo, and installs the VirtualBox guest support.

The result is packaged up as a *.ova file which can be directly loaded by VirtualBox and becomes the base image for my artifacts.

4. What’s Next?

With this setup, we automate the recurring and time consuming tasks of creating VirtualBox images that contain our artifacts. In my case, such artifacts contain the sources, benchmarking infrastructure, as well as the scripts to recreate all numbers and plots in the paper.

What is still missing is documentation of how the pieces can be used, how SOMns is implemented, and what one would need to do to change things. Thus, for the next artifact, I hope this automation will allow me to focus on writing better documentation instead of putting together all the bits and pieces manually. For new projects, I can hopefully reuse this infrastructure and get the artifact created by the CI server from day 1. Whether it actually works, I’ll hopefully see soon.

Docker would also be worth looking into as a more lightweight alternative to VirtualBox. Last year I asked academic Twitter, and containers seemed, by a small margin, to be the preferred solution. Ideally, most of the scripts could be reused and just executed in a suitable Docker container. Though, I still haven’t tried it.

Acknowledgements

I’d like to thank Richard and Guido for comments on a draft.

May 24, 2019

Pharo Weekly - [Consortium] Lifeware and Schmidt Pro

The Pharo consortium is very excited and super happy to bring to your attention the following announcement
that was presented during Pharodays 2019: https://fr.slideshare.net/pharoproject/

The consortium got two contracts that financially support one year of engineering work to improve Pharo.
The companies Lifeware and Schmidt Pro are funding this work. The total amount
of the two contracts is around 190K Euros.
In addition, the RMOD team got some resources from Inria.
The net result is that in 2019 the consortium will have 3.5 engineers working full time on Pharo:
– Esteban Lorenzano
– Pablo Tesone
– Cyril Ferlicot
– Guillermo Polito
It will boost Pharo. Note that the issues raised by Schmidt Pro and Lifeware,
and their impact on the roadmap of Pharo 8 and Pharo 9, are presented in https://fr.slideshare.net/pharoproject/.
What is key to notice is that the consortium works because many contributing companies share resources.
This builds strong soil to grow Pharo and the business around it.
On behalf of Inria, the consortium and the community, we would like to thank

Lifeware and Schmidt Pro for their strong support.

May 23, 2019

Pharo Weekly - Tech Talks restarted

Hi,

Last year we had some “Pharo Tech Talks”… we want to start that again.

Dates:

June 20
July 18
Sept 19
Oct 17
Nov 21
Dec 12

The time would be 17h local time (Berlin/Paris).

-> If you want to “drive” one of those dates, send me a mail.
-> Dates are flexible; if you want to do a tech talk, you can propose another date, too.
-> For the dates without a special talk, I think I will do a “let’s fix something small” session as the default
(with screen sharing, while the people on Discord help and discuss how to do it).

What can it be?
-> you could present your project
-> you could give a “lecture”
-> you could do a tutorial
-> you could moderate an audio chat around a topic

If someone wants to do something, send me a mail. I will for now already add these dates to the events list at
https://association.pharo.org/events

Marcus

May 22, 2019

Cincom Smalltalk - Smalltalk Digest: May Edition

In this edition, we will detail the birth of Cincom Smalltalk, remind readers about the new releases of Cincom ObjectStudio 8.9.2 and Cincom VisualWorks 8.3.2, and debut our 50th Hidden Gems screencast with a bonus.

The post Smalltalk Digest: May Edition appeared first on Cincom Smalltalk.

ESUG news - ESUG'19 registration is open

We are glad to announce that the ESUG'19 registration is open.

You will find all the info about this year's conference here: https://esug.github.io/2019-Conference/conf2019.html

Note that:

  • we increased some prices a little compared to the prices we had kept for more than 10 years:
    • Early Registration Fee: 500€ (all days) / 180€ (per day)
    • The early registration deadline is: July 25th 2019
    • Late Registration Fee: 660€ (all days) / 210€ (per day)
  • you can now directly add an extra person to the social dinner while registering (+60€)
  • we introduced specific fees for payment methods:
    • Payment by credit card (PayPal): +6% fee
    • Payment by bank transfer: free of charge

May 21, 2019

ESUG news - IWST19: Call for Presentations

IWST19 — International Workshop on Smalltalk Technologies Cologne, Germany; August 27-29th, 2019

Full CfP at: https://esug.github.io/2019-Conference/cfpIWST2019.html

ESUG news - ESUG 2019 - Call for Presentations

The ESUG board is pleased to announce that the 27th ESUG conference/summer school will be held in Cologne, Germany, 26-30 August 2019, with Camp Smalltalk on 24-25 August 2019. The conference is co-organized by ZWEIDENKER.

More information at https://esug.github.io/2019-Conference/call2019.html

Mariano Martinez Peck - Why is Smalltalk a good fit for IoT and edge computing?

I was gonna start a series of posts about Smalltalk, IoT, Edge Computing, Raspberry Pi, etc. But before that, I would like to answer a question that I am asked each time I present something related to these topics: Why is Smalltalk a good fit for IoT and edge computing? Does it have unique features over other languages? How would IoT benefit from Smalltalk?

In this post, I try to write my personal opinion about it.

Edge computing and hardware improvements

In the first stages of IoT, most of the devices were very hardware-limited and acted more like a PLC or similar. There wasn’t a “real” operating system running on the device. Instead, you would be given a language to code something for that target, then compile and run it. Running Linux or Smalltalk was not a possibility back then on what was a very small and cheap device.

Back then, most of the “code” was just retrieving sensor data and then performing an action: open the garage door when I do X, start the air conditioner if the temperature is higher than Y, video-record my backyard, or make sure my drink is always at the correct temperature. Many of those initial projects were just for fun, learning, teaching, playing, etc.

But things have changed in recent years with the explosion of IoT, AI, ARM processors, Raspberry Pi, and other pieces of the whole picture. One of the biggest game changers I can think of is the ability to run Linux. I think that was a before/after situation. Hardware got better, cheaper and smaller at the same time.

Today, you can have a machine with 4 cores (> 1GHz each), 4 GB RAM, eMMC/microSD, HDMI, USB 3.0 and a GPU that fits in your hand and costs less than 50USD. These machines (Raspberry Pi 3+, Pine64, BananaPi, ODROID-N2, and others) are so good that there are even clusters specially made from them. See PicoCluster and MiniNodes just as examples. Or my own 2-node cluster 😉

Anyway, the point is that, given the current hardware and software status, the original “code” that just got sensor data and passed it to a server has started to become more and more complex. Some “computation” has been moving from the server into the “edge device”, just like server-side rendering is moving to fat JS clients 🙂 So suddenly, you can now be running really complex business logic on a small device.

Not only is the “code” becoming real domain apps, but these small devices are also starting to be used in industry. One easy reason is to save costs. See this Sony example.

And there’s where I believe Smalltalk is a good fit.

Why Smalltalk could be a good fit for IoT?

  • True OOP: at the very least, Smalltalk has all the benefits it also has when running on a regular computer, a server or the cloud: objects all the way down, efficiency, simplicity, a live environment, dynamic typing, and many others. It’s not my intention here to list why Smalltalk is good or bad; there is plenty of material around that in books and on the web.
  • Easy deployment: if you have already deployed Smalltalk systems, you know how much easier it is compared to other languages. Yes, another great feature of the “image”.
  • Incredible debugging experience: why write a plain string stack trace if you can dump the real stack into a binary file and materialize it later for debugging? If you are familiar with Smalltalk, you probably read about that here or here. If you are not, this will blow your mind. Yes, you understood correctly: on the deployed system, you serialize the whole stack (with its variables) into a file at the moment the exception happens. Later, on whatever machine, you take that file and materialize it to get back a debugger with the original stack!
  • Remote debugging: wait… dumping a living stack into a file for further analysis is not enough? OK, you can have a LIVE remote debugger. That is, an exception is raised on your small device, and you get a debugger opened in the development environment on your laptop. And yes, you can change code, save it, and resume the exception (which is running on the device!!!).

  • Scalability and availability: a Smalltalk image makes it easier to deploy a system, but to scale horizontally or provide availability you still need to do quite a bit of sysadmin work. However, Smalltalk plays really well with state-of-the-art tools like Docker (see my previous posts Part1, Part2 and Part3) and Kubernetes.
  • Maturity: who can give you 40 years of history and continued improvements? The fact that it’s old doesn’t necessarily make it outdated.
  • GPIO access: Smalltalk has bindings for a few of the popular GPIO libraries like wiringpi or pigpio. That means that from Smalltalk itself you can manage the GPIO pins as well as the known protocols like 1-Wire, I2C, SPI, etc.
  • Uniformity: imagine using the same language, IDE and tools for the device that senses data (GPIO access), for the device doing “edge computing” (which may be the same as the former) and for the server-side logic.
  • Transparent object-oriented persistence: imagine if, from that small device running your business logic, you could transparently persist objects in an object database running on a remote cloud.

Conclusion

Am I saying that Smalltalk is the best system for IoT? No. It obviously has drawbacks. Exactly the same goes for Python. There is no silver bullet. What I want to say is that I think Smalltalk has enough unique features to be, at the very least, considered a serious alternative when doing IoT.

There have been a few new areas over the last 40 years, like web and mobile apps. Now we have IoT. Is this the computer revolution our Smalltalk father Alan Kay has been anticipating? I don’t know. But to me, it looks revolutionary enough to be prepared for.


May 20, 2019

David A. Smith - Croquet Lives Again

We have been working hard on the latest, greatest version of Croquet. Next week we will be rolling it out for our friends to play with. 


David A. Smith

Twitter: @Croquet
Skype: inventthefuture

I am a part of all that I have met; 
Yet all experience is an arch wherethro' 
Gleams that untravell'd world whose margin fades 
For ever and forever when I move. 

May 18, 2019

Smalltalk Jobs - Smalltalk Jobs – 5/18/19

If you have at least 3 years of VisualWorks experience, can speak and write German, and would love to live in the beautiful city of Cologne, then this could be an excellent job.

I’m going to guess that you need to be able to work in Germany as well.

May 16, 2019

Smalltalk Jobs - Smalltalk Engineer, Edinburgh, UK

The work is on a VisualWorks system used by a semiconductor/MEMS company. The worksite is the Scottish Microelectronics Centre in Edinburgh, with occasional visits to nearby Livingston, plus 10-20% of a typical year visiting customer sites in Europe, Asia and North America. The engineer will work with the existing small team to develop and maintain the system, including programming hardware interfaces. They should have OO experience and some computer hardware and networking knowledge. LabVIEW, Smalltalk and .NET experience are desired but not essential.

Pharo Weekly - [GSoC] Student Introductions

Google Summer of Code 2019 has officially started with its community bonding period. The Pharo Consortium has accepted 7 students from 4 different countries to participate in this year’s GSoC.

Every week our students will publish blog posts to announce and document their progress. In this first week, we asked students to introduce themselves, tell us a little about their background and the projects they will be working on.

These are the blog posts where our students introduce themselves to the community. Feel free to contact them personally with questions, feedback, or just words of encouragement.

  1. Atharva Khare, GSoC 2019: Extending DataFrame library for Pharo Consortium
  2. Nikhil Pinnaparaju, My Journey Into Google Summer of Code — 2019
  3. Dayne Lorena Guerra Calle, GSoC 2019 introducing my project: Next Generation of Unit Testing
  4. Evelyn Cusi Lopez, Better and more refactorings for Pharo — Part 1
  5. Myroslava Romaniuk, Improving Code Completion @ GSoC 2019: introduction
  6. Nina Medic, GSoC Project
  7. Smiljana Knezev, New Collections for Pharo