Friday, December 20, 2024

Fab Labs haven't been growing exponentially

The Gershenfelds claim that the number of Fab Labs has been doubling every 18 months in their 2017 book, Designing Reality (p. 11 and pp. 100-102), and in follow-up articles (Digital Fabrication and the Future of Work, 2018).

They go further, claiming that this new growth is a continuation of Moore’s Law, which fuels their “third digital revolution” rhetoric (book, p. 102).

I wish this were true, but it’s not.

This rose-coloured-glasses rhetoric has puzzled me, because there remain significant barriers to setting up and maintaining a Fab Lab. The Gershenfelds themselves point out that training Fab Academy alumni to the daunting skill level required follows linear, not exponential, growth.

Here are the figures from their 2018 article:

In this article they speculate that there will be 25,000 Fab Labs by 2026.

The Gershenfelds then predict that Fab Lab growth will level off, because by 2026 the machines will be so cheap and improved that personal fabrication will replace Fab Labs.

The 25,000 Fab Labs prediction corresponds very roughly to 4 doublings over the 10 year period, 2016-2026, ie a doubling every 2.5 years, not 18 months:

1,300 → 2,600 → 5,200 → 10,400 → 20,800
ie 1,300 × 2^4 = 20,800

However, if we go to the Fab Foundation home page, the figure cited there for the number of Fab Labs in the world (as of December 2024) is 2,300+.

So the observed doubling time since 2016 works out to nearly a decade, not 18 months or even 2.5 years! Also, as I pointed out in my earlier article, fab transformation hurdles, Fab Lab / Maker Space growth in Australia has stalled.
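These growth rates can be checked with a quick back-of-envelope calculation. The 1,300 and 2,300 figures are as cited above; treat the results as rough, since the underlying counts are themselves rough:

```python
import math

def doubling_time(n_start, n_end, years):
    """Years per doubling, assuming smooth exponential growth."""
    return years / math.log2(n_end / n_start)

# The Gershenfelds' prediction: 1,300 labs (2016) -> 25,000 labs (2026)
predicted = doubling_time(1300, 25000, 10)

# What actually happened: ~1,300 labs (2016) -> ~2,300 labs (Dec 2024)
actual = doubling_time(1300, 2300, 8)

print(f"predicted doubling time: {predicted:.1f} years")  # ~2.3 years
print(f"observed doubling time:  {actual:.1f} years")     # ~9.7 years
```

Either way, the observed curve is nowhere near an 18-month doubling.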

The Gershenfelds' book identifies plenty of reasons why Fab Labs haven't continued to grow exponentially. I think their book contains plenty of realism as well as hype.

But they still maintain their highly optimistic exponential growth rhetoric about digital fabrication. The most recent writing I have found by the three brothers is from 2021 on the "Center for Bits and Atoms" site, where they say:

Digital fabrication today is at approximately the same stage that digital computation was in the early 1980s, when personal computers gave millions of people access to a capability that had previously been limited to large organizations. PCs were to be followed two decades later by billions of mobile devices and trillions of connected things.

Today we have thousands of fab labs, with the potential for making millions of personal fabricators — small-scale fabrication systems for individual use — and a research road map leading to a future with billions of universal assemblers, and then trillions of self-assembling systems in future decades. As with the exponential improvements of the earlier digital technologies, each of these stages of development promises to be faster, better, and cheaper.
- The Promise of Self-Sufficient Production

In all our arguments and discussions we need to avoid the hype cycle rhetoric.

Nevertheless, community Fab Labs and school-based FabLearn Labs are still great things with enormous potential IMHO. I have outlined some of the reasons why in my earlier article, fab transformation hurdles.

As the authors say in their original book, digital fabrication is both hard and rewarding. This quote sums it up:

"Digital fabrication is hard. It introduces a set of new competencies, including the navigation of continually evolving CAD and CAM software as well as additive and subtractive hardware, embedded computing, and an understanding of the biological and chemical properties of the materials used in fabrication. It also requires design thinking, creativity, collaboration, problem solving and resiliency. These all require knowledge, skills and mindsets that cross very different disciplines and domains and, as a result, are not currently well integrated. We define fab literacy as the social and technical competencies necessary for leveraging digital fabrication technologies to accomplish personally and professional meaningful goals, as well as a commitment to the responsible use of the technologies. We cannot build towards a more self sufficient, interconnected, and sustainable society without widespread fab literacy." (p. 64)

Tuesday, December 17, 2024

fab transformation hurdles

Background reading: Designing Reality: How to Survive and Thrive in the Third Digital Revolution by Neil Gershenfeld, Alan Gershenfeld and Joel Cutcher-Gershenfeld.

Some write their stories in words. Some write their stories in code; some with materials; some with machines. My current story is a wobbly exploration through all these media to understand the Fab Lab.

Neil Gershenfeld has been articulating his Fab Lab vision for two decades now: “How to make (almost) anything”. After briefly revisiting what a Fab Lab is, this article outlines some of the hurdles that must be overcome to achieve that vision.

The Gershenfeld interview with Lex Fridman was fascinating IMO

Digital computation has led to the smart phone. Digital communication has led to the Internet. These first two digital revolutions have created new jobs and transformed traditional jobs. Will digital fabrication continue this trend? Is it correct to claim, as the Gershenfelds do, that digital fabrication is the third digital revolution? See Footnote.

What is a Fab Lab? Digital fabrication is often misunderstood: people think of it as just 3D printing. It actually involves a wide range of additive and subtractive technologies, as well as computer-aided design and embedded electronics.

The five types of machines found in a conventional fab lab are:

  • Vinyl cutter
  • Laser cutter
  • 3D printer
  • CNC machine
  • Digital embroidery machine

The MIT course that Neil Gershenfeld initiated in 2003 named “How to make (almost) anything” was a huge hit which led to the creation of Fab Labs around the world. An inspirational slogan!

What sort of things can we make? Well, in theory the list ranges from food, furniture, and crafts to computers, houses, and cars.

What is the overall goal here? The short-term killer app is personal fabrication, the ability to make or modify what you can't buy in a store. Personal fabrication can take many forms since it depends on each person. My 3D printed personal favourite so far is the Sierpinski Pyramid Lamp.

One possible social goal is to transform consumers into producers. The Gershenfelds endorse Blair Evans's vision:

A potential vision for this new blend is represented in the inspiring work of Blair Evans, an accomplished automotive engineer and educational leader who is now developing a local ecosystem of fab labs in an economically distressed part of Detroit. His vision is about what he calls “thirds”— building out the digital fabrication capability to the point that people might spend one-third of their time in paid labor to buy what they can’t make, one-third of their time using digital fabrication facilities to make what they can (with a focus on furniture, housing, aquaponic food production, and other practical things), and one-third of their time to follow their passions in whatever way they choose.
- Digital Fabrication and the Future of Work

However, in practice, what you can presently make depends on whether your local Fab Lab has million dollar machines or thousand dollar machines. In practice many fab labs can make very cool small things (eg. an articulated dragon on a 3D printer) but are not making big things. To make the bigger things you would need the big machines, like a CNC milling machine. Yes, the price will drop and access will improve over time. But for now it depends on where you live.

Not every Fab Lab or project ends in success. Neil’s brothers, Alan and Joel Cutcher-Gershenfeld, sometimes play devil's advocate in their book. When things don’t turn out, the inspirational slogan “How to make (almost) anything” transforms into “How to (almost) make anything”.

A case in point: I tried for 3 years to initiate a Fab Lab in Alice Springs. See my 2021 article, Your town needs a community Fab Lab.

I lobbied government, industry leaders, education leaders and citizens there. But to no avail. My calls were sometimes not returned and in the instances where interest was initially shown it never led anywhere significant.

I did have more success in introducing new innovative subjects and 3D printer technology at the school where I taught. The admin could see the need for a more engaging STEM or STEAM curriculum. But at no stage was I offered the opportunity to explain Neil Gershenfeld’s full Bits to Atoms vision. It felt like being at a banquet but only allowed to eat the grapes.

My failure to kick-start a community Fab Lab in Alice Springs could be put down to my poor persuasive powers. However, it might also have been due to the deficiencies of the local ecosystem, a troubled town of 25,000 people, in nurturing innovation. Neil Gershenfeld points out that MIT isn’t an isolated technology park but is embedded in an ecosystem or environment “that mixes long-term research, short-term development, small start-ups and large corporation, along with cafes, clubs and parks” (p. 49).

Furthermore, I notice that Fab Labs are not growing exponentially in Australia, unlike in some other countries. On the contrary, if you look at the map, some Australian Fab Labs have become inactive (Perth, Ballarat, Sydney). As the two sometimes-critical brothers point out, “Digital Fabrication is hard” (p. 64).

So, in this article, I want to discuss the hurdles as well as the tremendous potential of setting up a Fab Lab. In 2024, I moved back to Adelaide so will reference the Maker Spaces here.

A Fab Lab needs machines, software, space and knowledgeable people (mentors, volunteers).

Space is a huge issue. There are two maker spaces in Adelaide. The Adelaide Maker Space occupies a huge area in the basement of the WEA Building. The Parks Library Maker Space is part of the library system and has only a smallish room, which does restrict things.

Machines: I listed the 5 types of machines above. An important issue here is having enough commonality to allow interoperability between different Labs around the world. From my reading, the most popular machine is the laser cutter; the problem with 3D printers is that they are slow. I noted with interest that Neil Gershenfeld’s favourite machine is the CNC mill. The Fab Foundation site has a page where they specify how to get started and describe their ideal Fab Lab. For those interested in starting or understanding a Fab Lab, there are lots of important details on that page.

Software: Free and Open Source Software (FOSS, eg. Inkscape for 2D vector graphics design) lowers the barrier to interoperability, but this is not always possible. I’ve noticed some comments in the book (eg. from Nadya Peek, p. 73) about the need to improve CAD / CAM software to make it more intuitive for users.

Network effects, aka Metcalfe's law: the value of a computer connected to the Internet is proportional to the square of the number of computers in the network.
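As a toy illustration (my own, not from the book), counting the possible pairwise links shows why value grows quadratically: doubling the network size roughly quadruples the number of connections.

```python
def pairwise_links(n):
    """Distinct connections possible between n nodes: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling the network size roughly quadruples its Metcalfe value
for n in (10, 20, 40):
    print(f"{n} nodes -> {pairwise_links(n)} possible links")
# 10 nodes -> 45, 20 nodes -> 190, 40 nodes -> 780
```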

When digital fabrication hardware and software are interoperable across locations, network effects become possible, greatly accelerating innovation in a way that is not possible with analog fabrication. I'm wondering if the Maker Spaces in Adelaide could exploit this more. For example, one thing that has surprised me is that although it is very easy to find free 3D print designs online (Thingiverse etc.), it is not easy to find laser cut designs. If the global sharing of designs embedded in the Fab Lab charter is a reality, then why are laser cut designs hard to find?

In this sense digital fabrication is revolutionary, but only when linked to the earlier digital revolutions of computation and communication. I have a sense that Australian Fab Labs are operating too much in isolation, both from each other and from the global movement.

People: The Adelaide Maker Space in the WEA basement is staffed entirely by volunteers. This surprised me but it seems to be working. There are induction sessions to get started on particular machines, projects or rooms. There isn’t a formal ongoing mentoring system; if people are stuck they can ask a volunteer for help. This often works, but not always: eg. I had a problem where the laser cutter simply stalled at the start, which neither I nor the volunteer could solve.

The Parks library is staffed by a couple of paid workers who are expert makers. They have an induction system and you can make appointments if you need to skill up in a particular area.

Community: I spoke above about killer apps and how I made a Sierpinski Pyramid Lamp. Another way to look at this is about "must haves". What "must haves" do Fab Labs offer? The Blair Evans vision of making one third of your consumables in a Fab Lab may be achievable in the future but not in the present. The Gershenfelds argue a strong point here: that one of the "must haves" is the sense of community attained through the meeting and making process (pp. 77 and 81).

Fab literacy and the Fab Academy: Given that expert people are the main limiting factor for Fab Lab expansion, the Fab Academy runs a 24-week course to train them. I’ve had a look at this course and find it quite daunting. There are only two places in Australia where you can complete it:

Course details (look here to understand why I find it daunting):

Neil Gershenfeld calls this a distributed learning model (a hybrid of F2F and online learning, since part of the learning is done socially at a Fab Lab). Online MOOCs are notorious for their high drop-out rates, so this is an improvement on that model.

Money: Sherry Lassiter of the Fab Foundation estimates that the average budget for launching a community Fab Lab and running it for 2 years is $250,000 (p. 76).

The Adelaide Maker Space has various sponsors (scroll down to the bottom of their home page). They charge membership fees, plus fees for visits, inductions and workshops for non-members.

Neil Gershenfeld has some interesting discussion about who pays on page 42 of the book. He says that selling things made in the lab doesn't work, partly because Fab Labs are not set up to make things at scale. He goes on to point out that enlightened government can use Fab Labs to help disadvantaged youth stay out of trouble, a better option than what happened in Alice Springs (lock 'em up and get more police).

Philosophy: The how, what and why all need to be addressed. "How to make (almost) anything" implies that users have open slather on the what, but in practice that depends on their expertise. Learning works best when users make something that is personally or socially meaningful: the why. The how is mastery of all the hardware and software, which is a big task, but to focus only on that would be a mistake.

Conclusions:
  1. "How to make many interesting things" is not as inspirational as "How to make (almost) anything" but is more realistic at this stage
  2. "Third digital revolution" and "turning consumers into producers" probably overhype the case
  3. Fab Labs / Maker Spaces can have many great outcomes: rapid prototyping, training ground in useful skills for all and joyful community development for starters
  4. The future is bright since the technology will continue to improve, become more user friendly and cheaper
  5. Australian Fab Labs / Maker Spaces need to tap more into the global movement by sharing their designs (open source philosophy)
FOOTNOTES:

I tend to agree with this Amazon reviewer that all the Gershenfelds are wearing rose-coloured glasses with their "third digital revolution" rhetoric. Note, however, that in a 2018 article they said that exponential growth of Fab Labs would die out by 2025:

If you want to be proselytized about fab labs, this is the book for you. A key premise is that an analogy of Moore's law will (or should?) apply to digital fabrication. This is based on a few years of doubling of the number of fab labs out there. Moore's original paper was based on 10 years of data but the trend there has continued for 50 years. If that holds for fabbing, yeah, it'll change the world bigtime. But the case has yet to be made. I liked that the Gershenfeld brothers wrote different chapters of the book, with quite different life experiences they bring different perspectives. But it's all based on that exponential premise, one that I'm quite skeptical about. The last of Neil's chapters envisions how fabbing might eventually get to assembling very tiny parts so you could really make anything, but this is almost laughably sketchy and technically infeasible. There's something called chemistry that Neil doesn't seem to be paying attention to. Still, fabbing is a fascinating new technology with lots of possibilities and this book will give you a good feel for how it's affecting some people's lives. There are some good stories mixed in with the questionable extrapolation of trends.

Saturday, December 07, 2024

making a 3D mesh with tinkercad

We are going to make this starter mesh: the triangle interweaves with the hexagon and all parts can be moved around. Once you know how to make this starter, it will be possible to extend the mesh further.

My overarching reference here is a wonderful article by Jose Antonio, How to Design and 3D Print Flexible Meshes, who explains in detail the conditions under which 3D printers can bridge, ie. print over thin air!

You can use Tinkercad Codeblocks and Tinkercad 3D designs to create flexible meshes

Open Tinkercad https://www.tinkercad.com/ and load Codeblocks

We are going to make this hexagon prism with cut away holes:

Here is the annotated code for the hexagonal base and frame. I've stolen Jose's code and added some annotations to explain it.

Export the object as a Shape: Export > Shape.

Then open Tinkercad 3D designs

Your Codeblocks creation can be loaded from the Shapes library > Your Creations

“You won't be able to take it apart like a grouped 3d design, but you can drag and use it as a design component in any 3D design”

I named my new tinkercad file hex_prep

Next, we will make a triangle to fit into the holes in the hexagon. I worked out the dimensions of the triangle from one of the links in Jose's article:

Using these dimensions I made this code to make the triangle:

I rounded those measurements in Codeblocks

To find the scale factor for the inner triangular hole:

Scale = (22 − 2.2) / 22 = 19.8 / 22 = 0.9

After applying this I found I had to move the triangular hole 0.9 mm up the Y axis to obtain sides of equal thickness for the triangle. This was arbitrary guesswork; if you can explain where I miscalculated, please let me know! The problem is that the triangular hole does not centre itself.
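I can't fully account for the 0.9 mm either, but one plausible source is the scaling centre: shrinking the hole about a point other than the triangle's centroid shifts the centroid, so the walls come out unequal. A rough check (my own guesswork, assuming an equilateral triangle of side 22 mm):

```python
import math

side = 22.0                  # outer triangle side length (mm)
k = 0.9                      # scale factor calculated above
h = side * math.sqrt(3) / 2  # triangle height, about 19.05 mm

# Distance from two candidate scaling centres to the centroid,
# which sits h/3 above the base (and h/6 below the bounding-box centre)
centres = {
    "base midpoint": h / 3,
    "bounding-box centre": h / 6,
}

# Scaling by k about a point a distance d from the centroid
# moves the centroid by (1 - k) * d
for name, d in centres.items():
    print(f"scaling about the {name} shifts the hole by {(1 - k) * d:.2f} mm")
```

Neither comes out at exactly 0.9 mm (I get roughly 0.64 mm and 0.32 mm), so the rounded dimensions mentioned above are probably contributing as well; treat this as a sketch of the mechanism rather than a definitive answer.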

Name it tri or triangle and export as an STL

Then import into your tinkercad hex_prep file

We are aiming to make this:

Make two duplicates of the hexagon (Ctrl + D), then rotate them so they line up. To fine-tune rotation, use the outer ring and hold down the mouse button.

You now have to position the 3 hexagons and triangle correctly

Some Tips:

  • When necessary hold down the mouse and use outer ring for rotation (more fine control)
  • At the right time set Snap Grid to 0.1 mm
  • Ctrl + up arrow to raise the triangle. Raise the triangle by 0.1 mm at a time until you see a clear airgap, both below and above the triangle.

Important: There has to be a visible airgap above and below the triangle!

Print this one as a trial before moving on

  • Select and group
  • Export as STL
  • Load into your slicer and make the GCODE

Then a miracle occurs! The triangle is printing on thin air!

Here's a mesh that has been extended further

Wednesday, December 04, 2024

my job application cover letter

MY PREFERRED LEARNING APPROACH: 21stC MAKER EDUCATION

Where I claim to have a deeply thought out, innovative approach to learning and teaching:

The three game changers of 21st century learning are block coding, micro-controllers and the Fabrication Lab. These can be integrated in various ways. Bits to Atoms; Atoms to Bits. I agree with Neil Gershenfeld that this is the path to the third digital revolution.

My background qualifications are in Science (specialisation in Chemistry). But when computers came along I became an early adopter, particularly of the educational computing language Logo (which has since evolved into Scratch). I was persuaded by Seymour Papert’s book “Mindstorms” that Logo offered a more engaging way to teach maths, as well as computer science. Fast forward to today, and Scratch coding has become a multimedia storytelling fun machine accessible to nearly all students. Sadly, this still isn’t appreciated by many teachers and school leaders. Block coding (be it Scratch, MakeCode or SNAP!) is accessible and enjoyable for 95% plus of students. I call this “the wider walls”.

Seymour Papert was also an educational theorist. He spent some time collaborating with Piaget and developed the constructionist theory, a portmanteau of Piaget’s ‘constructivism’ (learners develop their own meanings) and ‘construction’ (we learn by building meaningful things). I am one of those rare teachers who studies educational theories and selected PhD theses. The constructionist approach continues to grow today led by Neil Gershenfeld (community Fab Lab), Paulo Blikstein (schools Fab Learn Lab) and others (Gary Stager, Sylvia Martinez, Yasmin Kafai, Josh Burker, Mitch Resnick, Brian Silverman etc.). This approach informs us how to design learning environments which promote highly engaging, self directed powerful learning.

However, I am not a one-eyed constructionist. After years of searching I was surprised to find Diana Laurillard's integration of a range of different learning theories. Her approach, The Conversational Framework, combines Instructionism, Constructionism, Socio-cultural learning and Collaborative learning into a meaningful whole. As well as learning by meaningful building, social learning and collaboration have to be built into the programme (along with essential Instruction).

In 2021 I was invited by Gary Stager to contribute to a book, 20 Things to Do with a Computer: Forward 50 commemorating the 50th anniversary of the seminal paper by Cynthia Solomon and Seymour Papert, “Twenty Things to Do with a Computer.” My contributing article was titled The Wider Walls, which developed the theme of making computational thinking available to all.

A good theory informs and improves practice. However, as well as the right software we also need the right hardware. Furthermore, creative imagination is another essential ingredient.

In my previous school I helped develop a subject called Artbotics (Craftbotics would have been a better name) where students made their own creations, mainly with cardboard, and controlled them in various ways (motion, lights, sound) with the Circuit Playground Express micro-controller, an appropriate alternative to the microbit.

The micro:bit is the ideal micro-controller. It is far more accessible than the Arduino because it runs on block code (MakeCode) and has sensors and controls built into the board.

STEM is a reasonable acronym but it’s better to extend it to STEAM with the amazing Turtle Art software and other ideas. For example, I accomplished this in a subject named Inventiveness. This took the form of the “Turtle Art Tile Project”. Students used Turtle Art to design attractive geometric shapes, then 3D printed them, then impressed them into clay and finally painted and glazed them.

Perhaps my most successful recent work came when my school experimented with the curriculum to set up my Year 8 class with extended learning times (2-, 3- or 4-hour lessons became the norm). I created a subject called Fabulous Fabrication where students worked in groups to develop their own designs, assisted by micro:bit and 3D printing technology. The proof of the pudding was that students repeatedly asked me if they could keep working through their recess and lunch breaks.

I am always on the lookout for new approaches which combine learning with modern technology. As well as the software and hardware mentioned above I have also experimented with Game Maker, 3D printer construction from kits, laser cutters, Hummingbird bit, ELECFREAKS kits, Spike LEGO, Makey Makey, TapeBlock (electric circuits for the disabled), drones, Raspberry Pi, Wolfram Alpha, CAD software (Tinkercad including CodeBlocks, Inkscape, Lightburn), microBlocks and a range of IDEs (VS Code, Jupyter) and Python libraries (Pygame, Matplotlib, Pandas, Django, Pytorch ...).

I have documented these and other learning experiences on my blog (https://billkerr2.blogspot.com/). Here is one sample article which outlines my preferred approach in more detail: Bits-and-Atoms-part-one.

That is possibly enough to flesh out my dot pointed CV, the aim here was to provide more of a big picture overview of my educational philosophy, theory and practice.

Wednesday, November 06, 2024

engraving a photo with a laser cutter

Starting with:

First up, this requires some awareness of the difference between raster and vector images

Raster files are composed of pixels, eg. a normal photograph; in this case the format was JPG. Vector files use mathematical equations to make geometric shapes, lines, and curves with fixed points on a grid to produce an image. A common vector format is SVG.

Raster images are challenging and require more preparation before engraving or etching them with a laser cutter. You need to start with a photo with lots of contrast, eg. a pure white background and pale clothes contrasting with darkish hair (or trim the photo, as I did).

One technique is error diffusion dithering (Stucki, Jarvis, Floyd-Steinberg): the darker the grayscale value, the denser the points are set, while the point size itself remains unchanged.
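To make error diffusion concrete, here is a minimal pure-Python sketch of the classic Floyd-Steinberg variant, operating on a grayscale image represented as rows of 0-255 values (real engraving software does the same thing with the heavier Jarvis and Stucki kernels):

```python
def floyd_steinberg(pixels):
    """Dither a grayscale image (rows of 0-255 values) to pure black/white.

    Each pixel snaps to 0 or 255; the quantisation error is diffused to
    unprocessed neighbours so the average brightness is preserved.
    """
    img = [[float(v) for v in row] for row in pixels]
    height, width = len(img), len(img[0])
    for y in range(height):
        for x in range(width):
            old = img[y][x]
            new = 255.0 if old >= 128 else 0.0
            img[y][x] = new
            err = old - new
            # Classic Floyd-Steinberg weights (in sixteenths): 7, 3, 5, 1
            for dx, dy, w in ((1, 0, 7), (-1, 1, 3), (0, 1, 5), (1, 1, 1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    img[ny][nx] += err * w / 16
    return [[int(v) for v in row] for row in img]

# A uniform 50% grey patch dithers to an evenly mixed black/white pattern
print(floyd_steinberg([[128] * 4 for _ in range(4)]))
```

Darker regions end up with more black pixels per unit area, which is exactly the "denser points" effect described above.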

I'm a GIMP user (open source), not a Photoshop user. For my needs I found a couple of good tutorials online:

GIMP tutorial
  • Scale image to 300 px/inch
  • Convert to black and white
  • Histogram adjusted for good range of dark and light areas
  • Adjust colour levels histogram towards the centre
  • Adjust colour curves to boost the contrast
  • Apply sharpening
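For anyone wondering what the levels adjustment actually does to the numbers: it linearly remaps a chosen input range onto the full 0-255 output range, clipping anything outside it. A minimal numeric sketch (my own, not GIMP's actual code):

```python
def adjust_levels(pixels, black_point, white_point):
    """Remap grayscale values so black_point -> 0 and white_point -> 255.

    Values between the two points are stretched linearly (boosting
    contrast); values outside the range are clipped.
    """
    scale = 255.0 / (white_point - black_point)
    return [
        [max(0, min(255, round((v - black_point) * scale))) for v in row]
        for row in pixels
    ]

# A flat, low-contrast row (values 100-160) stretched to the full range
print(adjust_levels([[100, 130, 160]], 100, 160))  # [[0, 128, 255]]
```

Curves adjustments generalise this by allowing a non-linear mapping, which is why they are better for selectively boosting mid-tone contrast.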

After the GIMP tutorial I then applied this YouTube Lightburn tutorial:

  • import and select the image
  • tools > adjust image Alt + I
  • Best image mode > Jarvis or Stucki
  • DPI – approx 300
  • To sharpen:
    • Enhance radius: 25
    • Enhance amount: 100
  • Lighten the face and remove background dots:
    • Brightness: 7

Note that the Preview normally looks poor and can be safely ignored! But you can get the time estimate through Preview.

After treatment with the two tutorials:
Laser cut version onto MDF (Medium Density Fibreboard). The contrast is OK but could be improved around the eyes and mouth:

Thursday, October 31, 2024

Sailboat shelf laser cut

Here's another laser cutting project from the DXFdownloads site: Sailboat shelf. The design in Lightburn looks like this:
As usual, cutting the design on the Adelaide Maker Space Thunder was straightforward.

Then came the task of how to support the shelf. My friend Pat helped again by suggesting a second smaller 3mm shelf with some 45 degree angular supports.

This was, you would think, a simple design, but being a very rusty Inkscape user I had to view some tutorials and also look online for hints. Finally I found the idea of putting some rectangular design pieces together and then tracing around them with the Bezier tool!

Finishing off required a little PVA woodworking glue.

Tuesday, October 15, 2024

finger spinners laser cut

I'm still pinching interesting designs off the web. I signed up to DXFdownloads. Lots of adverts (annoying) but they do have some interesting free downloads. The DXF format can be imported into Lightburn (and Inkscape).

The finger spinner design loaded in Lightburn looked like this:

Quite confusing. But it turned out that this was three different versions of the spinner. Thanks to George and Pat for helping sort this out.

The construction took a while since it required gluing many fasteners into place. So far I've constructed two out of three of the finger spinners. Some of the fasteners fell through the laser cutter grid and I couldn't retrieve them all.

Anyway, they spin nicely! A satisfying project!

update 18/10/24: Cut some more joiners and completed the third spinner, which again spins nicely!

Friday, October 11, 2024

mechanical iris laser cut

I’m still on a laser cutter learning curve. Not ready to do my own complex designs yet so am searching the web for interesting designs done by others.

I found a free, mechanical iris Creative Commons SVG design on this Maker Design Lab site and gave it a go. Thanks to Theresa Wasinger for generously sharing.

Simple design (using Lightburn software)

Note that you will need a wooden dowel to hold the pieces together, a bamboo skewer will do the trick.

More artistic design

For this one you need a method to locate the tiny joiners if they fall through the laser cutter grid, eg. sweep out the scrap collecting tray before the cut.

It's tricky connecting the top layer. A pair of tweezers to compress the joiner ends helped here.

What have I learnt so far? That setting up with Lightburn and doing the cutting is straightforward. I'm working at the Adelaide Maker Space and they have a great setup there. Putting the pieces together after the cutting is fiddly but doable. A friend has been helping me with that (thanks Pat!).

Wednesday, October 09, 2024

Four guidelines to become anti-fragile

Notes on this video Nassim Taleb - 4 Rules To Become Antifragile (For A Better Life)

1) Do hard things (Embrace adversities)
Post traumatic growth is possible, rather than Post traumatic stress
"The blazing fire makes flame and brightness out of everything thrown into it"
- Marcus Aurelius
"I finally got being a good startup founder down to two words - relentlessly resourceful"
- Paul Graham
When luck gives you lemons, you make lemonade

2) Go through life as a Flaneur. My translation of this is Be Adventurous.
Flaneur: Someone who unlike a tourist, makes a decision opportunistically at every step to revise his schedule (or his destination) so he can imbibe things based on new information obtained.

An adventurer likes disorder, has a different mindset than a tourist. Adventurers welcome uncertainty.

If you know in the morning what your day looks like with any precision, you are a little bit dead - the more precision, the more dead you are.

3) Develop an anti-education. My translation here is Follow your curiosity and passions
Great scientists like Darwin and Einstein didn't like the education system. It kills creativity. It's better to read a lot of books. Focus on things that are really important to you.
An apprenticeship is more important than an academic education

"Only the autodidacts are free"

4) Develop an anti-fragile life philosophy. My translation here is develop a Stoic philosophy (Marcus Aurelius, Seneca)
Keep the upside; don't be hurt by the downside
A Stoic is someone who transforms:
  • Fear into Prudence
  • Pain into Information
  • Mistakes into Initiation
  • Desires into Undertaking

Sunday, September 29, 2024

initial laser cuts

After my ruptured Achilles accident I had to relocate from Alice Springs to Adelaide for surgery. Adelaide has an excellent Maker Space facility, so I've joined up and am learning how to use one of their laser cutters.

Last year, as part of an Inventiveness course, I did these two as 3D prints (here) to make painted tiles. So, interesting to compare the 3D print version with a laser cut engraved version.

One of the helpful people at Maker Space gave me a link to a great site about making boxes. Here are some of the products:
Hinged box
Flex box
Shutter box

Then I received a request to make the Dungeons and Dragons dice box on that site. I added a couple of dragon images obtained from the Free SVG site. This worked well, but there were some burn marks on the surface. Perhaps I can find a workaround to eliminate this?

update 6/10/24
Escher
Bear on acrylic

Sunday, July 21, 2024

The Conversational Framework

- A brief introduction
- the Diana Laurillard section is partly plagiarized!!
- click on Diana's diagrams to see them more clearly
- bit of a preamble first before I get to The Conversational Framework.

I’ve been looking for some time, on and off, for a resolution of a problem I’ve had with learning theory.

Sometimes I am this teacher: one who just does what I think will work and, as a bonus, is also interesting. Those things that might work vary from year to year depending on the school environment, which varies a lot from school to school.

Sometimes I am this teacher: one who has studied a lot of different learning theories and is still surprised at how little interest other teachers seem to have in them. Some staff and schools hardly ever discuss learning theory.

For most teachers practice is primary. Find something that works. And then for those who try out different approaches – which seems necessary because student cohorts vary widely – it becomes apparent after a while that there is no magic bullet. There is no unified learning theory.

I’ve had a long-time interest in learning theories, which may have originated from the traditional teaching method – instruction based around a textbook – being so uninspiring.

Early on (the 1980s) I cottoned onto Seymour Papert’s Constructionism after reading his book “Mindstorms”. This got me interested in computers through the Logo coding language which promised to make maths more engaging. The challenge of Constructionism was for the teacher to create an engaging microworld where the student would learn without being formally taught. Turtle geometry was one such microworld. I’ve spent time exploring other microworlds and found some of them to be very successful, eg. recently, the Turtle Art tile project developed by Josh Burker.

Seymour did set up an ideological polemic of Constructionism (intrinsic learning in a computer generated microworld) versus Instructionism (traditional school based) and in his second book “The Children’s Machine” more or less called for the overthrow of traditional schooling.

This was fine by me and where possible I modelled my teaching along the lines he suggested.

However, when I taught at a disadvantaged school in the northern suburbs of Adelaide (Paralowie R12) I found that many of the students had missed out on so much before they started school that they needed lots of instruction to fill in the many gaps in their knowledge.

This created an epistemological crisis for me (a Skinner moment!) which, after some agonising, I resolved by deciding that teachers needed both Constructionism and Instructionism and an awareness of when to use each. Walk the walk along the spectrum of learning theories.

So for years this went on. I would delve into different learning theories and extract what I saw as useful things from each of them. There are some great learning theorists out there IMO and many of them have valuable things to offer. Probably best to provide details of this some other time.

Recently, my interest in learning theory was reawakened. It’s shocking to say, but in schools there is very little discussion of learning theories! What happened in my school is that there was a problem with engagement among quite a few year 7 and 8 students. So the school hired a learning consultant to fix things up. In my opinion, the theories he talked about were often not the best ones. But anyway, it did induce me to start exploring learning theories again.

I’m one of those strange people who reads conference papers and PhDs for fun. Luckily for me Diana Laurillard had been invited to present a keynote to the Constructionism 2020 conference in Ireland. I really liked her approach because she saw the distinctive strengths of Constructionism but also saw it as not the whole deal. It was part of the jigsaw, albeit a big part, with her whole learning jigsaw being made up of Instructionism, Constructionism, Social-cultural learning and Collaborative learning. We could call these the Big Four. There are others too but those four cover most of the ground I want to cover.

Her framework, which integrates these theories, is called The Conversational Framework. I think the way she presents it can be used as a guide for teachers to develop engaging lessons for students which covers most of the ways in which learning occurs. This could be a formal development process. There is an online website for doing that (see References). But I’m using it here as a self check that I’m offering all of these different ways of learning to students. And as a justification that my preferred ways of teaching are supported by learning theory. It's a big step up from "Walk the walk along the spectrum from Constructionism to Instructionism"

I won’t attempt a detailed explanation or historical origins of Diana’s whole framework (best to read her originals for that, see references) but rather introduce some of her marvellous schematics and argue a claim that my preferred way of teaching does cover all of the methods she recommends.

So the 6 Learning Types are Acquisition, Inquiry, Discussion, Practice, Collaboration and Production. All of them will be explained somewhere in this article.

learning through acquisition: the teacher (human, book, website, etc) communicates (one-way) concepts and ideas, and the learner reads, watches or listens

learning through investigation or inquiry: the teacher asks learners to explore or question the teacher's concepts (two-way). In this case they generate their own ideas of what they want to know.

learning through practice: the teacher sets up a learning environment with exercises for the learner. Ideally it includes a goal, the means for learners to put their concepts into practice to achieve it, feedback on their action, and the opportunity to revise and improve.

Learning through practice may be guided, with extrinsic feedback, or unguided (after the teacher has set up the learning environment), with intrinsic feedback. Here's another of Diana's diagrams to help explain this:

"This is why Papert could say that constructionist exercises enabled learning without a teacher. The teacher, in the form of a person, or a computer program running a multiple choice exercise, is not needed to comment or inform. The microworld, like the real world in the right context, can provide the ‘informational feedback’ the learner needs"

Learning through Discussion: questions and answers including through peers (social construction of ideas)

However, learning through collaboration is more demanding than simple ‘discussion’ in the top right-hand corner, as Figure 4 shows, because the learners are necessarily collaborating on constructing something together: that is the nature of collaboration. It involves Q&A, shared practice, tinkering / debugging / repeated iterations. The teacher may play no role at all.

Here, each learner is learning through practice by using the learning environment. And at the same time, they are discussing and sharing that practice. In order to do that, they are necessarily also linking the two, which helps them develop both concepts and practice with each other. The teacher need play no role at all, and yet there is a lot of active internal processing required of the learner during this process.

Learning through production: Here the learner must connect up concepts and practice, and then produce an essay, or performance, or design, or presentation to show what they have learned. Throughout this process the learner is actively processing both concepts and practice and the integration of the two. This is akin to what Papert referred to as constructing personally meaningful and shareable artefacts, where the sharing is part of the motivation to construct a successful artefact.

The main issue for a teacher is to be aware of the full range, and the extent to which their teaching embraces all these different types of conversation, between teacher and learner, learner and peers, and on the levels of both concepts and practice.

Constructionism is represented best through four of the 6 learning types. Learning through acquisition, and inquiry are not a particular focus. The role of the teacher is still critical, however, as it is a real design challenge to generate and modulate the learning environment that could achieve learning without a teacher. Very few achieve that as most rely greatly on the teacher to provide additional guidance and feedback. The teacher will also be the recipient of the artefacts produced by a constructionist pedagogy, able to use these for judging the value of it as a learning process.

Putting all these pedagogic approaches together defines the superset of essential requirements for supporting the learning process, a ‘Conversational Framework’, as shown in Figure 5 (Laurillard, 2002). The full framework embraces all the elements prioritized by each of the main pedagogic approaches, and demonstrates the complexity of what it takes to learn: a continual iteration between teachers and learners, and between the levels of theory and practice. It is not symmetrical: the teacher is privileged as defining the conception and designing the practice environment to match. The teacher also learns, from receiving learners’ questions and products, as well as reflecting on their performance. But teachers are learning about teaching, rather than learning about the concept or practicing the skill.

REFERENCE

Diana Laurillard. Profile

Diana Laurillard. Significance of Constructionism as a distinctive pedagogy (2020)
In Constructionism 2020 conference proceedings (Ireland), pp. 29-37

Diana Laurillard. The pedagogical challenges to collaborative technologies (2009)
In Computer supported collaborative learning

Diana Laurillard. Teaching as a Design Science: Building Pedagogical Patterns for Learning and Technology (2012). This book was too expensive at $200! link, but then I found it at anna's archive.

Applying the Conversational Framework using an online learning design tool
Diana Laurillard talks through how to use a free online learning design tool which applies the Conversational Framework to build courses using the six key learning types

Learning Designer
At this site you need to sign up and login. It then lets you design your own lessons using The Conversational Framework.

Tuesday, July 02, 2024

messy AI milestones

For me it is VERY useful to have a list of AI milestones with dates. This defines the ball park which is much, much bigger than ChatGPT. It provides a framework which helps inform future focus. The comments I've added are there as a self guide to future research. So, they often do hint at my favourites.

Keep in mind that there are at least four different types of AI: Symbolic, Neural Networks aka Connectionist, Traditional Robots and Behavioural Robotics, as well as hybrids. For some events in the timeline it is easy to map to the AI type but for others it is not so easy.

1943: Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, teamed up to develop a mathematical model of an artificial neuron. In their paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" they declared that:
Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms.
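
Their idea can be sketched in a few lines: a unit fires if and only if its weighted input sum reaches a threshold, and with hand-chosen weights such units realise propositional logic. This is an illustrative sketch in the spirit of the paper, not code from it; the weights and thresholds are my own choices.

```python
# A McCulloch-Pitts unit: binary inputs, a weighted sum, and an all-or-none
# threshold. The weights and thresholds below are hand-chosen to realise
# simple logic gates, in the spirit of the 1943 paper (not code from it).

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    # Inhibition modelled as a negative weight.
    return mp_neuron([a], [-1], threshold=0)
```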
1950: Alan Turing publishes “Computing Machinery and Intelligence” (‘The Imitation Game’, later known as the Turing Test)
1952: Arthur Samuel implemented a program that could play checkers against a human opponent

1954: Marvin Minsky submitted his Ph.D. thesis in Princeton in 1954, titled Theory of Neural-Analog Reinforcement Systems and its Application to the Brain-Model Problem; two years later Minsky had abandoned this approach and was a leader in the symbolic approach at Dartmouth.

1956: Dartmouth Workshop organised by John McCarthy coined the term Artificial Intelligence. He said it would explore the hypothesis that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
The main descriptor for the favoured approach was Symbolist: based on logical reasoning with symbols. Later this approach was often referred to as GOFAI or Good Old Fashioned AI.

Knowledge can be represented by a set of rules, and computer programs can use logic to manipulate that knowledge. Leading symbolists Allen Newell and Herbert Simon argued that if a symbolic system had enough structured facts and premises, the aggregation would eventually produce broad intelligence.

Marvin Minsky, Allen Newell and Herb Simon, together with John McCarthy, set the research agenda for machine intelligence for the next 30 years. All were inspired by earlier work by Alan Turing, Claude Shannon and Norbert Wiener on tree search for playing chess. From this workshop, tree search — for game playing, for proving theorems, for reasoning, for perceptual processes such as vision and speech and for learning — became the dominant mode of thought.
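
The tree search idea they championed is easy to sketch for noughts and crosses: recursively score every reachable position, with one player maximising and the other minimising. A minimal, unoptimised sketch (the board representation and function names are my own):

```python
# Minimal minimax tree search for noughts and crosses. A board is a
# 9-character string of 'X', 'O' or ' '.

def winner(b):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Score a position by searching the full game tree.
    Returns (score, best_move); 'X' maximises (+1 = X wins), 'O' minimises."""
    w = winner(b)
    if w == 'X': return 1, None
    if w == 'O': return -1, None
    if ' ' not in b: return 0, None          # board full: draw
    results = []
    for m in range(9):
        if b[m] == ' ':
            child = b[:m] + player + b[m+1:]
            score, _ = minimax(child, 'O' if player == 'X' else 'X')
            results.append((score, m))
    return (max if player == 'X' else min)(results)
```

With perfect play from both sides the empty board scores 0 (a draw), which `minimax(' ' * 9, 'X')` confirms, albeit slowly, since nothing here is pruned or cached.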

1957: Connectionists: Frank Rosenblatt invents the perceptron, a system which paves the way for modern neural networks
The connectionists, inspired by biology, worked on "artificial neural networks" that would take in information and make sense of it themselves. The pioneering example was the perceptron, an experimental machine built by the Cornell psychologist Frank Rosenblatt with funding from the U.S. Navy. It had 400 light sensors that together acted as a retina, feeding information to about 1,000 "neurons" that did the processing and produced a single output. In 1958, a New York Times article quoted Rosenblatt as saying that "the machine would be the first device to think as the human brain."

In their book Perceptrons, the symbolic advocates Minsky & Papert critiqued perceptrons as very limited in what they could achieve. The Symbolists won this funding battle.
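
Rosenblatt's learning rule itself is tiny: nudge the weights toward each misclassified example until everything is classified correctly. A toy sketch on the (linearly separable) logical OR function; the data, zero initialisation and epoch count are my own illustrative choices, not details of the Mark I hardware:

```python
import numpy as np

# Rosenblatt's perceptron learning rule on a linearly separable toy problem
# (logical OR). Illustrative choices throughout, not Mark I details.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])          # OR truth table

w, b = np.zeros(2), 0.0
for epoch in range(10):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # The rule: adjust weights only when the prediction is wrong.
        w += (target - pred) * xi
        b += (target - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]   # [0, 1, 1, 1] after training
```

The rule converges for any linearly separable problem; Minsky & Papert's point was that problems like XOR, which no single line can separate, are beyond a lone perceptron.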

1959: John McCarthy noted the value of commonsense knowledge in his pioneering paper "Programs with Common Sense" [McCarthy1959]

1959: Arthur Samuel published a paper titled “Some Studies in Machine Learning Using the Game of Checkers”, the first time the phrase “Machine Learning” was used – earlier there had been models of learning machines, but this was a more general concept

1960: Frank Rosenblatt published results from his hardware Mark I Perceptron, a simple model of a single neuron, and tried to formalize what it was learning.

1960: Donald Michie built a machine that could learn to play the game of tic-tac-toe (Noughts and Crosses in British English) from 304 matchboxes – small rectangular boxes which were the containers for matches, with an outer cover and a sliding inner box. He put a label on one end of each of these sliding boxes and carefully filled them with precise numbers of coloured beads. With the help of a human operator, mindlessly following some simple rules, he had a machine that could not only play tic-tac-toe but could learn to get better at it.

He called his machine MENACE, for Matchbox Educable Noughts And Crosses Engine, and published a report on it in 1961
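
The bead mechanism is simple enough to sketch. Here a single matchbox with three possible moves, one of which always wins, stands in for Michie's 304 boxes; the bead counts and payoff are invented for illustration:

```python
import random

# A sketch of MENACE's core mechanism. One "matchbox" per game state holds
# coloured beads, one colour per legal move; a move is drawn with probability
# proportional to its beads, beads are added after a win and removed after a
# loss. This single toy state, where move 2 always wins, is illustrative only.

random.seed(0)
box = {0: 4, 1: 4, 2: 4}                  # beads per move in this matchbox

def draw(box):
    pool = [m for m, n in box.items() for _ in range(n)]
    return random.choice(pool)

for game in range(200):
    move = draw(box)
    if move == 2:                         # toy payoff: only move 2 wins
        box[move] += 3                    # reinforce a winning move
    elif box[move] > 1:
        box[move] -= 1                    # punish a losing move (keep >= 1 bead)

best = max(box, key=box.get)              # the box has "learned" move 2
```

Winning moves accumulate beads and so get drawn more often; losing moves fade towards a single bead, which is essentially reinforcement learning done in cardboard.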

1960s: Symbolic AI in the 1960s was able to successfully simulate the process of high-level reasoning, including logical deduction, algebra, geometry, spatial reasoning and means-ends analysis, all of them in precise English sentences, just like the ones humans used when they reasoned. Many observers, including philosophers, psychologists and the AI researchers themselves, became convinced that they had captured the essential features of intelligence. This was not just hubris or speculation – it was entailed by rationalism. If it were not true, then it would bring into question a large part of the entire Western philosophical tradition.

Continental philosophy, which included Nietzsche, Husserl, Heidegger and others, rejected rationalism and argued that our high-level reasoning is limited and prone to error, and that most of our abilities come from our intuitions, our culture, and from our instinctive feel for the situation. Philosophers familiar with this tradition, such as Hubert Dreyfus and John Haugeland, were the first to criticize GOFAI (Good Old Fashioned AI) and the assertion that it was sufficient for intelligence.

1963: First PhD on computer vision, by Larry Roberts at MIT

1963 (recounted in 1985): The philosopher John Haugeland in his 1985 book "Artificial Intelligence: The Very Idea" asked these two questions:
  • Can GOFAI produce human level artificial intelligence in a machine?
  • Is GOFAI the primary method that brains use to display intelligence?
AI founder Herbert A. Simon speculated in 1963 that the answers to both these questions were "yes". His evidence was the performance of programs he had co-written, such as Logic Theorist and the General Problem Solver, and his psychological research on human problem solving.

1966: Joseph Weizenbaum creates the Eliza Chatbot, an early example of natural language processing.
1967: MIT professor Marvin Minsky wrote: "Within a generation...the problem of creating 'artificial intelligence' will be substantially solved."

1968: Origin of Traditional Robotics: an approach to Artificial Intelligence by Donald Pieper, "The Kinematics of Manipulators Under Computer Control", at the Stanford Artificial Intelligence Laboratory (SAIL) in 1968.

1969-71: The classical AI "blocksworld" system SHRDLU, designed by Terry Winograd (mentor to Google founders Larry Page and Sergey Brin) revolved around an internal, updatable cognitive model of the world, that represented the software's understanding of the locations and properties of a set of stacked physical objects (Winograd, 1971). SHRDLU carried on a simple dialog (via teletype) with a user, about a small world of objects (the BLOCKS world) shown on an early display screen (DEC-340 attached to a PDP-6 computer)

1979: Hans Moravec builds the Stanford Cart, one of the first autonomous vehicles (outdoor capable)

1980s: Back propagation and multi-layer networks used in neural nets (only 2 or 3 layers)

1980s: Rule-based Expert Systems, a more heuristic form of logical reasoning with symbols, encoded the knowledge of a particular discipline, such as law or medicine

1984: Douglas Lenat (1950-2023) began work on a project he named Cyc that aimed to encode common sense in a machine. Lenat and his team added terms (facts and concepts) to Cyc's ontology and explained the relationships between them via rules. By 2017, the team had 1.5 million terms and 24.5 million rules. Yet Cyc is still nowhere near achieving general intelligence. Doug Lenat made the representation of common-sense knowledge in machine-interpretable form his life's work
Alan Kay's speech at Doug Lenat's memorial

1985: Robotics loop closing (Rodney Brooks, Raja Chatila) – if a robot sees a landmark a second time it can tighten up on uncertainties

1985: Origin of behavioural based robotics. Rodney Brooks wrote "A Robust Layered Control System for a Mobile Robot", in 1985, which appeared in a journal in 1986, when it was called the Subsumption Architecture. This later became the behavior-based approach to robotics and eventually through technical innovations by others morphed into behavior trees.

This has led to more than 20 million robots in people’s homes, numerically more robots by far than any other robots ever built, and behavior trees are now under the hood of two thirds of the world’s video games and many physical robots, from UAVs to robots in factories.

1986: Marvin Minsky publishes "The Society of Mind". A mind grows out of an accumulation of mindless parts.
1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams published a paper Learning Representations by Back-Propagating Errors, which re-established the neural networks field using a small number of layers of neuron models, each much like the Perceptron model. There was a great flurry of activity for the next decade until most researchers once again abandoned neural networks.
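
The core of the 1986 method fits in a short script: run the network forward, then push the output error back through each layer with the chain rule and adjust the weights. The XOR task, network size and learning rate below are illustrative choices of mine, not taken from the paper:

```python
import numpy as np

# A minimal two-layer network trained by back-propagating errors, in the
# spirit of Rumelhart, Hinton and Williams (1986). Task, architecture and
# hyperparameters here are illustrative choices, not from the paper.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR: not linearly separable

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)      # hidden layer of 4 units
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss():
    return float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

loss_before = loss()
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error back through each layer (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)
loss_after = loss()
```

The hidden layer is what makes XOR learnable at all, which is exactly the capability a single perceptron lacks.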

1986: Perhaps the most pivotal work in neural networks in the last 50 years was the multi-volume Parallel Distributed Processing (PDP) by David Rumelhart, James McClellan, and the PDP Research Group, released in 1986 by MIT Press. Chapter 1 lays out a similar hope to that shown by Rosenblatt:
People are smarter than today's computers because the brain employs a basic computational architecture that is more suited to deal with a central aspect of the natural information processing tasks that people are so good at. ...We will introduce a computational framework for modeling cognitive processes that seems… closer than other frameworks to the style of computation as it might be done by the brain.

Rumelhart and McClelland dismissed symbol-manipulation as a marginal phenomenon, “not of the essence of human computation”.
1986: The term Deep Learning was introduced to the machine learning community by Rina Dechter

1987: Chris Langton instigated the notion of artificial life (Alife) at a workshop in Los Alamos, New Mexico, in 1987. The enterprise was to make living systems without the direct aid of biological structures. The work was inspired largely by John von Neumann and his early work on self-reproducing machines in cellular automata.

1988: One of Hinton's postdocs, Yann LeCun, went on to AT&T Bell Laboratories in 1988, where he and a postdoc named Yoshua Bengio used neural nets for optical character recognition; U.S. banks soon adopted the technique for processing checks. Hinton, LeCun, and Bengio eventually won the 2019 Turing Award and are sometimes called the godfathers of deep learning.

Late 1980s: The market for expert systems crashed because they required specialized hardware and couldn't compete with the cheaper desktop computers that were becoming common

1989: “Knowledge discovery in databases” started as an off-shoot of machine learning, with the first Knowledge Discovery and Data Mining workshop taking place at an AI conference in 1989 and helping to coin the term “data mining” in the process

1989: “Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System”, by Rodney Brooks and Anita Flynn, in which they proposed the idea of small rovers to explore planets, and explicitly Mars, rather than the large ones under development at that time

1991: Rodney Brooks published “Intelligence without Reason”. This is both a critique of existing AI being determined by the current state of computers and a suggestion for a better way forward based on emulating insects (behavioural robotics)
1991: Simultaneous Localisation and Mapping (SLAM), Hugh Durrant-Whyte and John Leonard: symbolic systems replaced with geometry plus statistical models of uncertainty (used in self-driving cars, navigation and data collection from quadcopter drones, with inputs from GPS)

1997: IBM's Deep Blue defeats world chess champion Garry Kasparov
1997: Soft landing of the Pathfinder mission to Mars. A little later in the afternoon, to hearty cheers, the Sojourner robot rover deployed onto the surface of Mars, the first mobile ambassador from Earth
Early 2000s: New symbolic-reasoning systems appeared, based on algorithms capable of solving a class of problems called 3SAT, alongside another advance, SLAM (Simultaneous Localisation and Mapping), a technique for building maps incrementally as a robot moves around in the world

2001: On the morning of September 11, Rodney Brooks' company iRobot sent robots to ground zero in New York City. Those robots scoured nearby evacuated buildings for any injured survivors that might still be trapped inside.

2001-11: Packbot robots from iRobot were deployed in the thousands in Afghanistan and Iraq, searching for nuclear materials in radioactive environments and dealing with roadside bombs by the tens of thousands. By 2011 there was almost ten years of operational experience with thousands of robots in harsh wartime conditions, with humans in the loop giving supervisory commands

2002: iRobot (Rodney Brooks' company) introduced the Roomba
2005: The DARPA (Defense Advanced Research Projects Agency) Grand Challenge was won by Stanford's driverless car, which drove 211 km on an unrehearsed road

2006: Geoffrey Hinton and Ruslan Salakhutdinov published "Reducing the Dimensionality of Data with Neural Networks", where an idea called clamping allowed the layers to be trained incrementally. This made neural networks undead once again, and in the years since this deep learning approach has exploded into practical machine learning

2009: Foundational work on neurosymbolic models (d'Avila Garcez, Lamb, & Gabbay, 2009) examined the mappings between symbolic systems and neural networks

2010s: Neural nets learning from massive data sets

2011: A week after the tsunami, on March 18th 2011, iRobot (Brooks was still on the board) got word that its robots could be helpful at Fukushima. The company rushed six robots to Japan, donating them and not worrying about ever getting reimbursed: the robots were on a one way trip. Once they were sent into the reactor buildings they would be too contaminated to ever come back. iRobot sent people to train TEPCO staff on how to use the robots, and they were soon deployed even before the reactors had all been shut down.

The four smaller robots that iRobot sent, the Packbot 510, weighing 18kg (40 pounds) each with a long arm, were able to open access doors, enter, and send back images. Sometimes they needed to work in pairs so that the one furthest away from the human operators could send back signals via an intermediate robot acting as a wifi relay. The robots were able to send images of analog dials so that the operators could read pressures in certain systems, they were able to send images of pipes to show which ones were still intact, and they were able to send back radiation levels. Satoshi Tadokoro, who sent in some of his robots later in the year to climb over steep rubble piles and up steep stairs that Packbot could not negotiate, said “[I]f they did not have Packbot, the cool shutdown of the plant would have [been] delayed considerably”. The two bigger brothers, both 710 models, weighing 157kg (346 pounds) with a lifting capacity of 100kg (220 pounds), were used to operate an industrial vacuum cleaner, move debris, and cut through fences so that other specialized robots could access particular work sites.
But the robots sent to Fukushima were not just remote control machines. They had an Artificial Intelligence (AI) based operating system, known as Aware 2.0, that allowed the robots to build maps, plan optimal paths, right themselves should they tumble down a slope, and retrace their path when they lost contact with their human operators. This does not sound much like sexy advanced AI, and indeed it is not so advanced compared to what clever videos from corporate research labs appear to show, or what painstakingly crafted edge-of-just-possible demonstrations from academic research labs are able to do when things all work as planned. But simple and un-sexy is the nature of the sort of AI we can currently put on robots in real, messy, operational environments.

2011: IBM’s Watson wins Jeopardy

2011-15: Partially in response to the Fukushima disaster the US Defense Advanced Research Projects Agency (DARPA) set up a challenge competition for robots to operate in disaster areas

The competition ran from late 2011 to June 5th and 6th of 2015, when the final competition was held. The robots were semi-autonomous, with communications from human operators over a deliberately unreliable and degraded communications link. This short video focuses on the second place team but also shows some of the other teams, and gives a good overview of the state of the art in 2015. For a selection of the greatest failures at the competition see this link.

2012: Nvidia's CUDA platform (released in 2007) enabled researchers to use GPUs for general-purpose processing. Among these researchers was a Ph.D. student in Hinton's lab named Alex Krizhevsky, who used CUDA to write the code for a neural network that blew everyone away in the ImageNet competition, which challenged AI researchers to build computer-vision systems that could sort more than 1 million images into 1,000 categories of objects

AlexNet's error rate was 15 percent, compared with the 26 percent error rate of the second-best entry. The neural net owed its runaway victory to GPU power and a "deep" structure of multiple layers containing 650,000 neurons in all.
In the next year's ImageNet competition, almost everyone used neural networks.

2013-18: Speech transcription systems improve and proliferate – we can talk to our devices

2014: A Google program automatically generated this caption: “A group of young people playing a game of Frisbee” (reported in a NYT article)
2015: LeCun, Bengio, Hinton (LeCun 2015)
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

2015: Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion. Diffusion models have been commonly used to generate images from text. Still, recent innovations have expanded their use in deep-learning and generative AI for applications like developing drugs, using natural language processing to create more complex images and predicting human choices based on eye tracking.
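
The forward half of the process has a neat closed form that a short sketch can show: after t noising steps, a sample is just a weighted mix of the original data and fresh Gaussian noise. The linear noise schedule and toy data below are my own illustrative choices:

```python
import numpy as np

# The forward ("noising") half of a diffusion model. Data is gradually mixed
# with Gaussian noise until only noise remains; a trained network learns to
# reverse these steps. Schedule and toy data are illustrative choices.

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # per-step noise amounts
alpha_bar = np.cumprod(1.0 - betas)       # fraction of signal surviving to step t

def q_sample(x0, t, rng):
    """Closed-form sample of x_t given x_0:
       x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = rng.normal(size=np.shape(x0))
    return np.sqrt(alpha_bar[t]) * np.asarray(x0) + np.sqrt(1 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(8)                            # a toy "image" of eight pixels
early = q_sample(x0, 10, rng)              # still mostly signal
late = q_sample(x0, T - 1, rng)            # almost pure Gaussian noise
```

Generation runs the other way: a network is trained to predict the added noise at each step, so that starting from pure noise it can denoise step by step back to data.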
2016: Google's AlphaGo AI defeated world champion Lee Sedol, with the final score being 4:1.
2017: In one of DeepMind’s most influential papers, “Mastering the game of Go without human knowledge”, the very goal was to dispense with human knowledge altogether, so as to “learn, tabula rasa, superhuman proficiency in challenging domains” (Silver et al., 2017).
(this claim has been disputed by Gary Marcus)

2017-19: New architectures developed, such as the Transformer (Vaswani et al., 2017), which underlies GPT-2 (Radford et al., 2019)
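
The Transformer's central operation, scaled dot-product attention, is short enough to write out: each query position takes a softmax-weighted average of the values, with weights given by query-key similarity. The toy shapes and random inputs below are my own illustrative choices:

```python
import numpy as np

# Scaled dot-product attention, the core operation of the Transformer
# (Vaswani et al., 2017). Toy shapes and random inputs are illustrative.

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)        # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)      # rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))     # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))     # 5 key positions
V = rng.normal(size=(5, 4))     # one value vector per key
out, w = attention(Q, K, V)     # out: (3, 4), w: (3, 5)
```

Because every position can attend directly to every other, the architecture sidesteps the sequential bottleneck of recurrent nets, which is a big part of why it scaled so well.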

2018: Behavioural AI:
Blind cheetah robot climbs stairs with obstacles: visit the link then scroll down for the video

2019: Hinton, LeCun, and Bengio won the 2019 Turing Award and are sometimes called the godfathers of deep learning.
2019: The Bitter Lesson by Rich Sutton, one of the founders of reinforcement learning.
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin… researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. …the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation.
(This analysis is disputed by Gary Marcus in his hybrid essay)

2019: Rubik’s cube solved with a robot hand: video

2020: OpenAI introduces GPT-3, a natural language model which later spouts bigoted remarks

2021: DALL-E images from text captions

2022: Text to images
Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is considered to be a part of the ongoing artificial intelligence boom. It is primarily used to generate detailed images conditioned on text descriptions.

Stable Diffusion is a latent diffusion model, a kind of deep generative artificial neural network. Its code and model weights have been released publicly, and it can run on most consumer hardware equipped with a modest GPU with at least 4 GB VRAM. This marked a departure from previous proprietary text-to-image models such as DALL-E and Midjourney which were accessible only via cloud services.

2022, November: ChatGPT is a chatbot and virtual assistant developed by OpenAI and launched on November 30, 2022. Based on large language models (LLMs), it enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive user prompts and replies are considered at each conversation stage as context.

ChatGPT is credited with starting the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence (AI). By January 2023, it had become what was then the fastest-growing consumer software application in history, gaining over 100 million users and contributing to the growth of OpenAI's current valuation of $86 billion.