
If you want to constantly improve your skills as a Maker, you should read this now...

Starting in only a couple of days (December 7th-13th), our friend Peter Dalmaris is hosting a free (limited time only) online event packed with incredible presentations by Makers, for Makers.

Whether you’re a beginner or have been a Maker for a long time, you’ll find tons of great ideas and actionable info from world-class Makers such as Simon Monk, John Teel, Jason Long, Alain Pannetrat, Richard Kolk, Karsten Schulz and many more.

Here’s just a small sample of what you’ll learn during the summit sessions:

  • How to create graphics and animation using an Arduino and an LCD display

  • How to create almost anything with a laser cutter and 3D printer

  • How to re-shape society through Making

  • How to contribute to your favourite open-source project, even if you are not a programmer

  • How to take your electronics prototype to market

  • How to use the BBC Micro:bit in your next project

  • How to design (and build) your own microprocessor

  • How to design control algorithms using model-based design methodologies

  • How to get started with embedded system design

  • How to use JSON in your Arduino IoT projects

  • How to get on to IoT 2.0, a network dedicated to 20 billion devices

  • How to build a reliable wired IoT system with noCAN

  • Plus much, much more

Karsten Schulz is one of the speakers at this summit. In his presentation, he will show the inner workings of a computer processor and explain how he built his very own from scratch. The result is the B4, a 4-bit processor construction kit that demonstrates key computing concepts, including memory, load, store, addition, and subtraction. It is perfect for teaching students, teachers and even makers such as yourself how a simple computer operates. It also has a virtual companion, the B4 MyComputerBrain simulator, which runs in a browser and hosts a series of interactive experiments that lead to a functional 4-bit processor capable of carrying out basic arithmetic operations. Both are not just concepts or prototypes, but actual products that you can purchase in the shop.
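To give a flavour of what a 4-bit machine with memory, load, store, add and subtract actually does, here is a tiny Python sketch. It is purely an illustration of those concepts and is not the B4's actual instruction set or design:

```python
# Illustrative toy 4-bit machine (NOT the B4's real architecture):
# a small memory, one accumulator, and load/store/add/subtract operations.

MASK = 0b1111  # everything is 4 bits wide, so results wrap around at 16

memory = [0] * 16   # 16 memory cells, 4 bits each
acc = 0             # a single accumulator register

def load(addr):
    """Copy a value from memory into the accumulator."""
    global acc
    acc = memory[addr] & MASK

def store(addr):
    """Copy the accumulator into memory."""
    memory[addr] = acc & MASK

def add(addr):
    """Add a memory cell to the accumulator (wraps at 4 bits)."""
    global acc
    acc = (acc + memory[addr]) & MASK

def sub(addr):
    """Subtract a memory cell from the accumulator (wraps at 4 bits)."""
    global acc
    acc = (acc - memory[addr]) & MASK

# A tiny "program": compute 7 + 5 - 3 and store the result in cell 3.
memory[0], memory[1], memory[2] = 7, 5, 3
load(0); add(1); sub(2); store(3)
print(memory[3])  # 9
```

Because every value is masked to 4 bits, results wrap around at 16, which is exactly the kind of behaviour the B4 experiments let learners discover hands-on.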

If this or any of the other speakers' talks interest you, register for the summit using our affiliate link at techexplorations.com/st/summit/registration/?ref=11.

And to help you keep track of it all and get the most out of the event, Peter just released the 2019 Maker Mind Meld Summit Playbook.

>>>Get the Mind Meld Summit Playbook for FREE here: techexplorations.com/st/summit/playbook

 

In science, students conduct experiments during which they measure all sorts of things, such as time, temperature, force, etc. Students collect the data in a table, plot the data points in a cartesian plane and then try to find a curve that best fits the data. For this, they need to formulate a hypothesis. Is the relationship linear? Exponential? Logarithmic? Polynomial? Or perhaps a combination of some of them?

What if an AI could find the best curve and make predictions about what is happening between and beyond the data points we have measured in the lab? For our scientist readers: yes, we are talking about interpolation and extrapolation.

Artificial Neural Networks (ANNs) are essentially multi-dimensional optimisation engines. Let's see if they are up to the job. We start with the experiment to obtain data.

A science experiment

A student investigated the influence of temperature on the reaction rate of hydrogen peroxide and potassium sodium tartrate with the catalyst cobalt chloride. The student measured the reaction time three times each at 30, 40, 50, 60, and 70 °C. At 30 °C the reaction didn't finish within the allotted time, so the student entered the maximum elapsed time of 1,200 seconds (20 minutes). The student recorded the data in a table:

The student then charted the data.

AI training data

The training data is easily obtained from the experiment. Internally the ANN works with numbers from 0 to 1, so we need to normalise our data. Some ANNs do this automatically for us, so let's take a quick look at how this works. First, we find the maximum values for temperature and time. From the table, that's 70°C and 1,500 seconds. We want a little bit of margin, and go with 100°C and 2,000 seconds.

We divide all our temperature values by 100 and our time values by 2,000. This leads us to the normalised data, which the AI can work with.
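As a minimal sketch of that step in Python (the reaction-time values below are placeholders, not the student's actual measurements):

```python
# Normalisation sketch: divide every value by the chosen maximum so it lands
# between 0 and 1. The reaction times are placeholder values, not real data.
MAX_TEMP = 100.0    # °C, chosen with a bit of margin
MAX_TIME = 2000.0   # seconds, chosen with a bit of margin

temperatures = [30, 40, 50, 60, 70]     # °C
times = [1200, 600, 300, 150, 80]       # seconds (illustrative only)

norm_temps = [t / MAX_TEMP for t in temperatures]
norm_times = [t / MAX_TIME for t in times]

print(norm_temps)   # [0.3, 0.4, 0.5, 0.6, 0.7]
print(norm_times)   # [0.6, 0.3, 0.15, 0.075, 0.04]
```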

The learning process

Let's run the training process by clicking on the start learning button in the top right corner of our screen.

The training data will be shown to the AI and it will make a series of repeated improvement attempts until the error of the network falls below a preset threshold. Sit back and observe how the red dots, representing the ANN's knowledge base, start moving towards the black dots (our lab data). The video below is a real-time recording. The seconds are a calculated output from the AI and not a progress indicator. The process is fast!
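The tool does all of this at the click of a button. If you are curious what "repeated improvement attempts until the error falls below a preset threshold" can look like in code, here is a rough equivalent using scikit-learn's MLPRegressor on the placeholder data from the normalisation sketch above; it is not the tool's actual implementation:

```python
# Rough sketch of the training idea (not the tool's actual implementation):
# the network keeps adjusting its weights until the error stops improving
# by more than a small threshold.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Normalised placeholder data (temperature / 100, time / 2000) -- these are
# illustrative values, not the student's real measurements.
X = np.array([[0.30], [0.40], [0.50], [0.60], [0.70]])
y = np.array([0.600, 0.300, 0.150, 0.075, 0.040])

net = MLPRegressor(hidden_layer_sizes=(10,),  # one small hidden layer
                   solver="lbfgs",            # works well on tiny datasets
                   max_iter=20000,
                   tol=1e-6,                  # the "preset threshold"
                   random_state=0)
net.fit(X, y)

# Ask the trained network about a temperature we never measured (45 °C):
print(net.predict([[0.45]]) * 2000)  # de-normalise back to seconds
```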

We can now drag the temperature slider to set temperature values between and outside of our experimental data. You can see this as the red curve in the image below. This visualisation helps us to form a hypothesis and, if we want, test this further in the lab. In this particular case, the part of the curve for temperature values above 30°C looks a lot like an exponential function in the general form y=a*e^(-bx).
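If you want to test the exponential hypothesis outside the tool, SciPy's curve_fit can estimate a and b and then interpolate and extrapolate for you. Again, the data points below are placeholders standing in for the lab measurements:

```python
# Checking the hypothesis y = a * e^(-b*x) with SciPy.
# The data points are placeholders, not the real lab data.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

temps = np.array([30, 40, 50, 60, 70], dtype=float)       # °C
times = np.array([1200, 600, 300, 150, 80], dtype=float)  # s (illustrative)

(a, b), _ = curve_fit(model, temps, times, p0=(10000, 0.07))

# Interpolation (between measured points) and extrapolation (outside them):
print(model(45, a, b))   # predicted reaction time at 45 °C
print(model(80, a, b))   # predicted reaction time at 80 °C
```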

If you want to try this yourself, head over here: You will need to create a free teacher account, which takes no more than 30 seconds. Student accounts can be purchased from the built-in shop for a reasonable $5.

Let us know what you think.

Until next time,

The Doctor.

Acknowledgement. We want to thank Esther Schulz, a Year 11 student at Kenmore State High School in Brisbane, for conducting the experiment and for allowing us to use her data.

We are always on the lookout for engaging scenarios that teachers and students would appreciate. This one here is a result of a collaboration with our good friends from the Digital Technologies Hub and Apps for Good in the UK.

Sometimes we write and post things on social media in a hurry. Such posts can hurt people and make them feel bullied. Wouldn't it be great if an AI could check our posts as we write them and warn us if they are potentially hurtful?

AI is an excellent candidate for this kind of scenario, as there are so many different ways of combining words into full or partial sentences that it is difficult to hardcode them all with if-then-else statements (branching).

Training data

It won't be too hard for students to come up with a couple of nice and mean things to say and enter them into a table. This will be our training data.

Not sure what to write? Here are a few examples to get you started.

While students type in their training data, the AI gets configured. We felt strongly that this was important to show, since the connection between the data and the AI is often a bit of a mystery.

When we have entered all the training data, our network looks like the one below. A beautiful 3-layered artificial neural network with one input perceptron for each unique word, 10 perceptrons in the hidden layer and two perceptrons that will tell us whether the word (or words) are kind or mean.

While we see words, the AI sees numbers: 'slimeball' in the image above is represented as a '1', as we can see when we hover over the perceptron in the input layer that is connected to the input word.

Yes, we see binary numbers at work here. We could just as well have used decimal numbers, but there was no need. We'll talk about them in a future post.
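For readers who want to see the whole idea in code, here is a sketch of the same setup built with scikit-learn: one 0/1 input per known word, a hidden layer of 10 perceptrons and a kind/mean decision. The six training sentences are invented for illustration, and this is not the tool's actual code:

```python
# Sketch only: bag-of-words inputs (one 0/1 value per known word), a hidden
# layer of 10 perceptrons, and a kind/mean classification. The training
# sentences are invented examples, not the tool's real data or code.
import numpy as np
from sklearn.neural_network import MLPClassifier

kind = ["you are awesome", "what a great idea", "that is so nice of you"]
mean = ["you are a slimeball", "what a terrible idea", "you smell bad"]

sentences = kind + mean
labels = [1] * len(kind) + [0] * len(mean)   # 1 = kind, 0 = mean

vocab = sorted({w for s in sentences for w in s.split()})

def encode(sentence):
    """Bag-of-words: a 1 for every vocabulary word present in the sentence."""
    words = sentence.split()
    return [1 if w in words else 0 for w in vocab]

X = np.array([encode(s) for s in sentences])
y = np.array(labels)

net = MLPClassifier(hidden_layer_sizes=(10,),  # the 10 hidden perceptrons
                    solver="lbfgs",            # suits a tiny dataset
                    max_iter=5000,
                    random_state=0)
net.fit(X, y)

# How certain is the network about a new post? Columns are [mean, kind],
# much like hovering over the output perceptrons in the tool.
print(net.predict_proba([encode("you smell so nice")]))
```

The predict_proba call plays the same role as hovering over an output perceptron: it shows how confident the network is in each verdict rather than just the final answer.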

Let's quickly run the training process by clicking on the start learning button in the top right corner of our screen and then drag the little slider to the right to speed up the process.

The training data will be shown to the AI and it will make a series of repeated improvement attempts until the error of the network falls below a preset threshold. The dial at the bottom-right will eventually turn green when the training process is complete. Voila.

Let's now enter a post for the AI to analyse. We could try 'you smell so nice'. Note how the AI initially considers 'you smell so' to be 'mean' until the word 'nice' tips the verdict in favour of 'kind'. Note that this particular example was not part of the training data.

The AI is actually 78% certain that the meaning of the sentence is kind, which we can see when we hover the mouse over the 'Kind things' output perceptron.

Btw, these little popup windows are available for every perceptron and allow your students to trace the flow of data throughout the entire network.

Have a go and type in a number of different posts and observe what the AI thinks about them. Remember, it has only seen 12 examples of posts, so its experience is a bit limited. It will especially struggle with irony and sarcasm and of course with any words it doesn't know. But we can see how it will get better and more rounded with bigger datasets and perhaps with a thesaurus to find out about similar words. But overall, not a bad result for 5 minutes of work and zero lines of coding.

If you want to try this yourself, head over here: You will need to create a free teacher account, which takes no more than 30 seconds. Student accounts can be purchased from the built-in shop for a reasonable $5.

Let us know what you think.

Until next time,

The Doctor.

Acknowledgements. We'd like to thank Apps For Good, UK and the Digital Technologies Hub for their contributions to the inspiration and refinement of this scenario.

Update [17.10.2019]: The lesson plan for this activity, jointly developed by the Digital Technologies Hub and the Digital Technologies Institute, is now available. It can be accessed on the DT Hub Website.

 