educators should help design languages for coding

off topic aside: dear wordpress, please move it back to the left. thats terrible over there! how many websites do you know of that put stuff like that on the right? ive right-justified this paragraph to exaggerate how absurd the move is. oh… and now everything on the reader is off-center, why?


amidst the idea that coding is just about teaching logic (and the suggestion that we put more time into the abstract than the functional,) lets talk about the benefits of actual coding.

many educators already struggle with the problem of connecting "how are we going to use this?" with coding. incredibly, they already know one of the answers: coding is great for teaching people to problem-solve using logic.

its also good for getting people to finally “make friends” with their computer. instead of trying to point at stuff to make it do things all day, coding brings the computer in as a real ally and workhorse. instead of telling an application to talk to the computer, youre talking to it yourself!  coding is the shortest and straightest route to digital literacy.

but even though it helps solve (or even prevent) computerphobia, the fact that some educators are mildly computerphobic– and notice it in their students– leads them to say things like “wait! this is good for teaching logic– cant we just teach logic instead?”

now youve not only thrown out the bathwater and baby, but the bathtub as well! and youve thrown out the tool that was so great for teaching logic– even with practical everyday applications.

dont forget, coding actually produces something. it may not be the next version of windows or even the next popular video game, but it has results that people can look at and even reuse– how many school assignments are like that, really? theres only so much room on the front of the icebox.

laura has the right idea: make tools easier to teach with, rather than shy away from using any actual computer tools.

thats also the idea behind the fig language– but this isnt about just fig. i use fig myself, but it was meant as an example: one tool that makes it easier to learn coding. there are several!

nonetheless, fig is a showcase of ideas that can make coding more forgiving, and easier to learn. it even throws one of those ideas away– in basic, if you ask for the value of “x” and x wasnt set, it gives you 0.

python and javascript will give you an error instead; very few languages have ever given you 0, and even modern(ized) users of basic will discourage it or use a compiler option to make that give an error.

i could have made fig return a 0 for unset variables, but some error messages are actually helpful. heres what fig does instead of giving you 0:

each_line_starts_with_a_variable  "hello world"  ucase  print


a few special commands (block commands mostly) start without a variable, but other than that its a standard in fig. and naming the variable at the beginning of the line sets it to 0, but you have to actually name it before you can use it. then you can do all you want with it on that line, and the value persists– until you start a line with it again. if you want variables to persist, switch to another one after using them:

x 37

y 28

now x print

now y print

x # now x is 0 again


what it doesnt do is let you name a variable later on in the line, if you havent used it already. then you still get an error:

fig 4.1, jan 2017 mn

1 now p print

error: variable or function not created, but referenced… “p” needs to be set before first use
error in line 1:
now p print


why do you want an error? the same reason you want your spellchecker to underline thisword in red. its trying to help!

writing helpful error messages is an art (not one im claiming mastery of, either.) most error messages presume you already know enough about what youre doing that youll at least understand the message. but it would not be impossible to make a system for coding where the error messages offer to teach you how to properly do whatever you did wrong.

if theres room on the screen, you could offer a simple example of something that works instead. you could even offer an interactive tutorial. however, the more information you add to an error, the more someone might think “oh, no! look at all that– this is really bad!”

a kernel panic really is bad:

for one thing, it means anything you havent saved is gone. however, after you turn it off and reboot, it will probably be alright otherwise. an average user might get a kernel panic once or twice a year (if that,) unless your electric lines are noisy.

when youre programming, most error messages mean “you typed something in wrong.” thats ok! find it and fix it. it will often tell you where, and figuring it out is part of coding. it also teaches you one of the most valuable applications of that logic they want you to learn: debugging.

of course, once youre not afraid of error messages, youve overcome one of the things that the average user lives in fear of– a message that says “hey, somethings wrong!” and they dont know what it is. and that is a great reason to learn coding!

but educators shouldnt throw away these tools– by all means, add more. but its rare to offer such a strong connection between learning concepts and applying them as there is with computing. dont squander it! use your voice– online or with computer-savvy associates– to talk about what would make the coding experience easier, without throwing the whole opportunity away.



understanding coding through other computer tasks

one of my goals as an “educator” is to always have an answer for someone who says this sort of thing– even while literally standing in front of a laptop, doing computer tasks (as they were today.)

  • “im not very ‘computery.'”
  • “im computer illiterate.”
  • “i dont really get computers, i just use them.”
  • “i would never learn/understand/get coding.”


i had a pretty nice conversation this morning where i explained that when you click on a menu, basically the computer is presenting you with options and youre selecting one by clicking on it– and that when you click, youre basically telling the computer to do a thing based on what item you select. ok, thats obvious.

then i said "coding is a lot like that, except instead of clicking an option, you basically tell the computer 'do this thing.'" for most people, the objection is "but its a lot more complicated than that!" but heres the thing– not always! and it doesnt have to be.

i tell people that in the 80s, it was easy for kids to learn to code. it wasnt part of the curriculum in every school, it was more of an extra-credit thing. but i took a computer class in high school. i didnt need it, but it was fun and exposed me to a few additional ideas– not to mention a class full of kids that were older (i was probably the only freshman, ta-da) and one guy was learning how to code in c. (i was impressed. i think he also gave me a boot sector virus, not necessarily on purpose… but i found it and removed it. no harm done.)

in the 90s, i tell people– it changed from teaching computers to teaching applications. this is a bit like taking swim lessons and only learning to dog-paddle because it works, and then switching people to a new sort of pool every 5-10 years so that they need to learn a new way to swim each time. and you can see the results everywhere– people that arent “computery,” using computers and often feeling helpless or at least frustrated.

im not saying everyone needs to be a computer enthusiast, but literacy is literacy, and “training” is training. and training results in literacy far less often. so when people say “everyone should learn to code,” they arent saying everyone should become a software engineer, or work for google. or get rich making video games.

touching quickly on actual coding, i wrote fig to make it easy to use and demonstrate these semi-universal programming concepts:

  • variables
  • input
  • output
  • basic math
  • loops
  • conditions
  • functions


i also wrote a little story in 6 parts called “the robot hypnotist” to explain them:

many of these can also be explained using "everyday" computer tasks. some of those tasks will be familiar to you, some might not be. lets see what we can accomplish:


  • variables

variables are an easy one. a variable is basically a piece of data with a name. when the program wants to keep track of something, it holds it in "memory." two of the most important parts of the computer are the cpu– where the actual tasks are performed (its like the pencil in a ridiculously basic metaphor) and memory (which is like the paper.) depending on how old your computer is (say, 35 years old for example) it may save parts of memory to a floppy disk, or not at all. a modern computer has a hard drive, or flash (which works more or less like a hard drive.)

a variable is just a piece of data with a name. thinking the way a computer does, the computer:

  • puts the name itself in memory
  • puts the data its linked to in memory
  • puts a number representing WHERE in memory the data is, somewhere relative to where the name is stored.

so perhaps you create two variables called “name” which holds a persons name, and “mobile” which holds a phone number.

for each variable, the computer stores the words “name” and “mobile” in memory. when you ask for those variables, it looks up each– the name becomes a number, the number points to part of memory, and it goes to that part and gets the data that goes with the variable name.

so what everyday task is like setting a variable? creating a small file on your computer!

open your word processing program, type in a phone number, and save it as "phone.doc" or "phone.odt" (or just type in "phone")– this is like creating a variable in programming/coding.

instead of being stored only in memory, it will be stored first in memory (like most programs, the word processing program uses variables, it just doesnt tell you) and then copied from memory to a file on the hard drive. but its very similar.

variables are useful to coders because its usually easier to juggle a handful of names than a handful of numeric memory locations. heres a memory location:

0xb7409a04

thats a short way (theres a story there) of saying: 3074464260. my question to you is, would you rather think about the numeric memory address 3074464260, or would you rather think about a variable called "name"?

so thats one of the main reasons that coders use variables.
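heres a rough python sketch of the same idea– the person and phone number are made up, and in cpython the id() function happens to show the kind of numeric address the data lives at (the exact number changes every run):

```python
# a variable is a name attached to data. behind the scenes, the
# name leads to a numeric memory address.
name = "ada"            # made-up person
mobile = "555-0100"     # made-up phone number

print(name, mobile)     # asking for the variables gets the data back

# in cpython, id() reveals the kind of big number the computer
# actually uses to find the data (it varies from run to run):
print(id(name))
```

would you rather keep track of whatever number id() printed, or the word "name"?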

theres another thing that works that way too: the internet domain name system. you type in a name like wordpress.com, and your browser looks it up using an internet phone-book system thats called "dns." just like the computer itself, the internet has a numeric address (called an ip number) for every resource currently attached to the internet. (thats pretty amazing.) so right this moment, if i type wordpress.com into the web browser, it uses the numeric addresses i have in my dns settings to access the dns servers, which then take the query "wordpress.com" and give a number back:


thats where wordpress.com is attached to the internet as i type this! but since wordpress.com is huge, it has many numeric entries. a smaller website could have the same ip address for months or even years, but running another dns query i get:


has wordpress.com already moved? no. because if i go to the browser and give it the other ip, it still takes me to wordpress.com (although the page it takes me to tells me: "The address cannot be registered. Site name must be at least 4 characters. But you can sign up and choose another one.")

but the number still points to wordpress.com! the day after i post this? its possible that all the numbers will be different. but a dns query for "wordpress.com" will stay updated (this is called "dns propagation," but it just means "updating the addresses") until automattic (the company that brings you wordpress) stops paying to re-register their domain– kind of like if you stopped paying to keep the same phone number: eventually you would be removed from the phone book.
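you can sketch that dns "phone book" in a few lines of python. the names and addresses in the dictionary below are made up (the .test domain and 203.0.113.x addresses are reserved for exactly this kind of example); the last line does a real lookup of "localhost", which resolves on your own machine without touching the internet:

```python
import socket

# dns is basically a phone book: a name maps to a numeric address.
phone_book = {
    "mysite.test": "203.0.113.7",      # made-up documentation address
    "othersite.test": "203.0.113.42",  # also made up
}

def lookup(name):
    return phone_book.get(name, "not found")

print(lookup("mysite.test"))
print(lookup("gone.test"))

# a real dns-style lookup; "localhost" always points back at
# your own computer:
print(socket.gethostbyname("localhost"))
```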


  • input

its difficult to come up with a metaphor for input that isnt itself input. when you let your computer go to a lock screen (or you suspend or lock the computer) you know it will come up with a place for you to type in your password. but when it asks you, thats a program that uses input to collect what you type in.

when you move the mouse, thats input. when you press keys, thats input. youre doing input all the time, but rarely do you ever do anything (other than lock the computer, or open a new file in an editor) that tells the computer “wait for input.” the computer is almost always waiting for input. when you dont have any other program specifically waiting for input, theres a program called the “shell” that is waiting for input. its the thing you type commands into, or the thing that lets you click icons on the desktop. whether its graphical or text-based, its the “shell” that allows you to tell the computer to run your software.

and you can write your own shell. you can even write a program that watches what files you open in your browser, so that you can make a "website" (just for your own computer, even if its not on the internet) where you have pictures of, say– your word processor. and if you click on that picture (or icon,) the program watching your web browser will open your word processor for you. that would be a shell.

having it in the web browser, however, means that a clever person might be able to create their own website on the internet which, if you went to it, might be able to– say– open your word processor. they probably couldnt tell your word processor to do anything else, but you really dont want to give a website that much control. this is why, even though i did once create a "website" for my computer that let me open programs just by clicking on pictures or links, i never use that on a daily basis.

i have written other shells that are a better idea than that. in fact, there is now a polish version of a shell for dos and windows i wrote in 2006, which is still online and available for download. (im very flattered by this, even though its a very small program.) i used that shell for a very long time, instead of using the start menu in windows.
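a shell really can be that small. heres a toy one in python– the "programs" are made-up stand-ins, and it reads its commands from a list instead of the keyboard so the sketch stays self-contained:

```python
# a tiny text "shell": take a command, run the matching program,
# repeat until told to exit.
def hello():
    return "hi there!"

def date():
    return "pretend its jan 2017"

programs = {"hello": hello, "date": date}   # made-up command names

def run_shell(lines):
    # a real shell would call input() in a loop; a list of lines
    # keeps this example easy to run.
    output = []
    for line in lines:
        cmd = line.strip()
        if cmd == "exit":
            break
        elif cmd in programs:
            output.append(programs[cmd]())   # run the program
        else:
            output.append("no such program: " + cmd)
    return output

for reply in run_shell(["hello", "date", "wordprocessor", "exit"]):
    print(reply)
```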


  • output

like input, so many things you tell the computer to do are output. the nature of a shell is input/output, which is another way to say “a shell is an interactive program.” it keeps taking input, and keeps giving output. you press a key, the letter or number or punctuation shows up on the screen (usually. a keyboard shortcut might not– something else will likely happen instead.) but if you open a document and print a file, this is output. if you play a video, or even just music, this is output. in coding, an output command can be as simple as print “hello world!”

in fig, you have to set a variable first, then the print statement will print whatever the variable is set to:

x ; "hello world" ; ucase ; print


that prints the contents of the variable x, which is holding the string “hello world”, which the ucase function made all-upper-case. but each command does something to or with the variable x:

  • x is the variable name
  • “hello world” puts exactly that text into x
  • ucase takes the text in x, makes it upper-case, then puts the new version in x
  • print takes the data stored in x and puts it on the screen
  • it works even without the semicolons: x “hello world” ucase print


i could also open a new file, name it x.doc, type “hello world” into the word processor, select all the text, and use a menu option to make it all uppercase. then i could open the file to put it on the screen. but whatever.
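for comparison, heres the same output routine in python– one statement per fig command:

```python
x = "hello world"   # put the text into x
x = x.upper()       # ucase: make a new all-upper-case version
print(x)            # output: put it on the screen → HELLO WORLD
```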


  • basic math

you could just open a calculator program. but the computer is constantly dealing with numbers– thats why its called “computer” (a fancy name for “calculator.” actually before computing machines, a “computer” was a person that did lots of tedious math on paper, and/or using a slide rule, etc. …this position is not entirely obsolete: an “accountant” is a bit like a “computer” that focuses on financial data, although they may use more modern tools and have to keep track of some fairly esoteric rules about the sorts of things they do.)

when you click on a window and move it, youre doing math. youre moving the arrow which is represented by a number of dots from the top of the screen– so by just moving the mouse up, youre subtracting from the “y” value of the picture of the arrow. (a touchscreen works based on a similar idea. it ultimately produces numeric data.) as it moves sideways, the “x” value changes.


  • loops

if you select a handful of icons and click “open” on all of them at once, you are basically telling the computer to “iterate” or “loop over” the items you have selected. in fig you can have a group of words in a string:

x "hello there, how are you?"


and split the string into words, based on the fact that theres a space between them:

x ; "hello there, how are you?" ; split x " "  #### now x is a group of 5 words– just go with it


then you can loop over each one, just like when the computer opens all the files (icons) you selected:

forin word x
now word print

that will put each "word" on the screen, one after the other:

hello
there,
how
are
you?

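python does the same split-and-loop almost word for word:

```python
x = "hello there, how are you?"
words = x.split(" ")    # split into a group of 5 words
for word in words:      # loop over each one, like opening a
    print(word)         # handful of selected icons at once
```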
ive been doing this for 30 years, and i still think its a little freaky that a machine can process words that easily. (of course, it has no idea what they mean unless you tell it what they mean. and thats just more letters to the computer.)

i shouldve mentioned this in the "math" part, but the computer stores letters as numbers too. the letter "e" in hello, for example, is the number 101. lots of things can be the number 101, but to store "e" as 101 you have to use a thing called "encoding" to translate letters into numbers. two important encodings associate "e" with 101: ascii encoding, and unicode (which actually starts with ascii and keeps going– so the unicode for ascii 101 is unicode 101.)

ibm had an encoding scheme called "ebcdic" where "e" was 133. im a coder, and i dont usually keep track of these numbers. i know lowercase "a" is 96 (oops, no, its 97) but the way i find out is either looking it up on wikipedia (which i have no real need of doing– except i dont have a quick command for ebcdic) or i use ord("a") in python or x "a" asc in fig. this gives me the number 97.
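python will happily confirm those numbers:

```python
print(ord("a"))    # ord turns a letter into its number → 97
print(ord("e"))    # → 101 (ascii and unicode agree here)
print(chr(101))    # chr goes the other way: 101 → e
```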


  • conditionals

the easiest way to demonstrate a conditional in everyday computing tasks is to set up something to run at a certain time– like scheduling a blog post.

scheduling can be thought of (and implemented as) a loop that says something like:

“is it wednesday, at or after 1:30:00 pm?”

“if no, do nothing.”

“if yes, do the thing that was scheduled for that time.”

and then keep looping until the answer is “yes.” then stop looping (or keep looping, if there are any other times scheduled. or just in case times are added.)
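heres that schedule check boiled down to a python function– the day and times are made up, and a real scheduler would read the actual clock:

```python
def check_schedule(day, time):
    # "is it wednesday, at or after 1:30 pm?"
    # zero-padded 24-hour strings compare correctly as text.
    if day == "wednesday" and time >= "13:30":
        return "post the blog entry"   # if yes, do the thing
    return "do nothing"                # if no, do nothing

print(check_schedule("wednesday", "13:29"))
print(check_schedule("wednesday", "13:31"))
```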


  • functions

a function is a name attached to some code. whoa! thats so simple, that people get confused by it.

if you attach data to a name, thats called a "variable" and the variable is "referenced" when you use the variable later.

if you attach program code to a name, thats called a “function” (or if the function is attached to an object, its called an “object method” as objects have to make everything so very special…) and the function is “called” when you use the function later. this is just terminology– if youre “calling” a function, you are using it. if you “reference” a variable, you are using it.

how often do you call a function? well for example, every time you click on a menu– whether its the start menu or the little menu at the top of some programs (theyre getting more and more rare, but the "ribbon menu" counts too.) each of these things you do ultimately calls a function.

opening a program by clicking on an icon is also like calling a function. in fact everything the computer does is generally arranged into "functions." in the old days, functions were known as "routines" or subroutines. there are ways to distinguish between these concepts, but the distinguishing characteristics are small.

you can call a function that doesnt return a value a  “sub” and otherwise you can call it a “function.” or you can do like i do, and use the word “function” for both. in fig, functions are created (“defined”) using the function command (this name comes from basic. it also exists in other languages.) in python, you define a function using the def command.
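in python it looks like this (the function name and greeting are made up):

```python
def greet(name):              # attach this code to the name "greet"
    return "hello, " + name

print(greet("world"))         # "calling" greet → hello, world
```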


this is essentially what coding is!



Why Our Kids Must Learn to Code | Mark Heninger

i reblogged this over my post on writing a programming language:

i think its really important for people to think about this– some of you already know how to code. its good to know why you should help other people get there, if you can. and do check out the original post, even if youve read most of it here. i want you to check out the blog its on, if youve got a minute to do so.

Free Campus

Why Our Kids Must Learn to Code | Mark Heninger

Find the beauty, prose and voice in Code. We all should know code as well as we know our own language. What do you think?

Video link:

Free Campus link:


why learn computer programming

i read this article in 2006. im reblogging because theres no place for comments (on a science blog, theres no place for comments? how un-science-friendly.)

my answer:

and a solution:

note that microsoft did create a sort of “basic” dialect years after this article was written, called small basic. fig, by comparison, is arguably closer to the sort of language that basic was in the 70s and 80s.


Why Johnny can’t code

BASIC used to be on every computer a child touched — but today there’s no easy way for kids to get hooked on programming.
Sept. 14, 2006

Author: David Brin, Salon magazine

For three years — ever since my son Ben was in fifth grade — he and I have engaged in a quixotic but determined quest: We’ve searched for a simple and straightforward way to get the introductory programming language BASIC to run on either my Mac or my PC.

Why on Earth would we want to do that, in an era of glossy animation-rendering engines, game-design ogres and sophisticated avatar worlds? Because if you want to give young students a grounding in how computers actually work, there’s still nothing better than a little experience at line-by-line programming.

Only, quietly and without fanfare, or even any comment or notice by software pundits, we have drifted…


apps vs. code

an app is made of code. you can code an app that makes:

  • apps from code (text -> gui)
  • code from apps (apps that manage or produce code as text)
  • apps from apps (apps that manage or produce code graphically)
  • code from code (automatic translation from one kind of text to another. this is what “compilers” do.)


creating an app, whether you “write code” or not, is a useful and educational experience. it can teach you important things about apps, and code.

in my opinion, people should definitely learn to code if possible; even if they never design an app. designing an app is a great idea, too– but there is no substitute for the knowledge that just letters and numbers and punctuation (really just numbers) can make absolutely anything “digital” (just look at that word) happen.

computers translate things into numbers, and numbers into EVERYTHING that computers do. they are magic calculators that change the world. they can make music, video, images, organize, communicate (and publish) globally, teach, inspire, (and also do math!) an app is a product: code is a domain. accept no substitutes (but appreciate the applications, and the simplicities, of both.)