(series) how to design a programming language: even if youve never coded

the myth of the non-coder

 

the non-coder is a mythological beast, handed down from quaint old wives tales. people generally believe by age 10 that they are either a “computer person” or “not a computer person.” for an idea of what a computer person looks like, look up jack black as “computerman” on youtube.

the myth of the non-coder is reinforced by the myth of the coder. the coder is a person who was born understanding how to code, who has never heard of “hello, world” and who has never looked up a command more than once.

which isnt to say that coders and non-coders dont exist, but they exist almost entirely in mythology: made-up rules that are loosely based on reality.

a coder is someone who has coded. at least this is how it used to be. today, a coder is someone who has mastered coding! but this is another mythology; no matter how well you “master coding” there are more horizons and more things to learn.

perhaps you should hold off before telling people you are a “master coder.” but watch out for the many fanatics who will insist that until you are a master, you are not a coder. from there, you could spend the rest of your life trying to prove something to people who arent paying attention.

the way i learned coding was typing examples out of a book, noting what they did, and learning the commands by comparing what happened to what i typed. i learned that print put words on the screen by typing in code that included the print command, and noting that it put words on the screen. you can also learn that print puts words on the screen by reading that it does that.

actually coding is a more interesting experience than just learning it from a book– although a book can help a lot. as you type in code, you go from noting what happens to predicting what you think will happen, to writing code that does what you want. though you still have to try it to be sure it does what you think it will– then repeat the process with changes and fixes.

 

 

ive never cared much for syntax highlighting

(image: syntax-highlighted code, with tag names and equals signs shown in red)

 

syntax highlighting is meant to make it easier to read and manage code. its not intended as training wheels, but as a tool for professionals and students alike.

i find it draws attention away– if only a small amount of it– from what im trying to do. there are also at least two problems with the colours chosen. the first: which parts of syntax get the brightest, most vibrant colours (the ones that draw the most attention) is decided the same way for everyone. just for a random example, in the picture above the items in red– a colour used for “stop” and “warning” because it gets the most attention– are the tag names and the equals sign. a more useful scheme to me would be if = was one colour, and == was a different colour (to make assignment vs comparison easier to spot.)

its different for every person. i dont need the tag names in red– theyre already marked with a chevron to the left. thats what im looking for to find tag names. yes, red certainly makes it easier to find the tags (or equals signs) but the whole time im coding, those tags now try to pull me away from the rest of the code. its not worth the tradeoff.

the other problem with colours chosen is that at any given time, what im really looking for in code is a specific group of things, and it changes based on context. so while im looking for whatever it is that im looking for, (it might not even be syntax) ive got all this colourful noise in the text im searching through. now picture a needle in a haystack: do you want to look for that silver needle in a bale of ordinary blonde hay, or would it be easier if the hay were eight or nine different colours?

i find highlighting very useful for search results– especially multiple results displayed at once:

(image: leafpad, with multiple search results highlighted)

 

but i only use the search feature if a quick look over the page doesnt suffice, and syntax highlighting only gets in the way.

im inclined to think that syntax highlighting is something a lot of people just “put up with” because its there, and its “too much bother” to turn off. (most people hate configuring anything in general, unless they need to and they cant.) and im sure its useful at least for most of the people that make it a feature in the editors theyre writing. it would be funny if that werent the case– if everyone thought that everyone else preferred highlighting, so developers that didnt want or need it assumed they had to add it for the users, while users assumed there was benefit (because why else have the feature?)

its even possible that syntax highlighting is a fad, but im sure a lot of people either love it, or think they do. i gravitate towards editors that dont have it, and i will turn it off if i have to.

 

 

educators should help design languages for coding

off topic aside: dear wordpress, please move it back to the left. thats terrible over there! how many websites do you know of that put stuff like that on the right? ive right-justified this paragraph to exaggerate how absurd the move is. oh… and now everything on the reader is off-center, why?

 

amidst the idea that coding is just about teaching logic (and the suggestion that we put more time into the abstract than the functional), lets talk about the benefits of actual coding.

many educators already struggle with the problem of connecting “how are we going to use this?” with coding. incredibly, they already know one of the answers: coding is great for teaching people to problem-solve using logic.

its also good for getting people to finally “make friends” with their computer. instead of trying to point at stuff to make it do things all day, coding brings the computer in as a real ally and workhorse. instead of telling an application to talk to the computer, youre talking to it yourself!  coding is the shortest and straightest route to digital literacy.

but even though it helps solve (or even prevent) computerphobia, the fact that some educators are mildly computerphobic– and notice it in their students– leads them to say things like “wait! this is good for teaching logic– cant we just teach logic instead?”

now youve not only thrown out the bathwater and baby, but the bathtub as well! and youve thrown out the tool that was so great for teaching logic– even with practical everyday applications.

dont forget, coding actually produces something. it may not be the next version of windows or even the next popular video game, but it has results that people can look at and even reuse– how many school assignments are like that, really? theres only so much room on the front of the icebox.

laura has the right idea: make tools easier to teach with, rather than shy away from using any actual computer tools. https://codeboom.wordpress.com/2017/01/07/gui-zero-making-python-guis-really-simple/

thats also the idea behind the fig language– but this isnt about just fig. fig was intended as an example; i also use it, but it was meant to be one tool that makes it easier to learn coding. there are several!

nonetheless, fig is a showcase of ideas that can make coding more forgiving, and easier to learn. it even throws one of those ideas away– in basic, if you ask for the value of “x” and x wasnt set, it gives you 0.

python and javascript will give you an error instead; very few languages have ever given you 0, and even modern(ized) users of basic will discourage it or use a compiler option to make that give an error.

i could have made fig return a 0 for unset variables, but some error messages are actually helpful. heres what fig does instead of giving you 0:

each_line_starts_with_a_variable  "hello world"  ucase  print

 

a few special commands (block commands mostly) start without a variable, but other than that its the standard in fig. naming the variable at the beginning of the line sets it to 0– but you do have to name it before you can use it. then you can do all you want with it on that line, and the value persists– until you start a line with it again. if you want a variables value to persist, start your next lines with a different variable:

x 37
y 28
now x print
now y print
x # now x is 0 again

 

what it doesnt do is let a variable appear later in the line if you havent already created it. in that case, you still get an error:

fig 4.1, jan 2017 mn

1 now p print

error: variable or function not created, but referenced... "p" needs to be set before first use
error in line 1:
now p print

 

why do you want an error? the same reason you want your spellchecker to underline thisword in red. its trying to help!

writing helpful error messages is an art (not one im claiming mastery of, either.) most error messages presume you already know enough about what youre doing that youll at least understand them. it would not be impossible to make a system for coding where the error messages offer to teach you how to do properly whatever you did wrong.

if theres room on the screen, you could offer a simple example of something that works instead. you could even offer an interactive tutorial. however, the more information you add to an error, the more someone might think “oh, no! look at all that– this is really bad!”
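
none of this requires anything exotic– heres a minimal sketch in python of an error that carries its own working example. the names (FriendlyError, explain) are made up for illustration; this isnt how fig or any real library does it:

class FriendlyError(Exception):
    # an error that carries a working example along with the message
    def __init__(self, message, example):
        super(FriendlyError, self).__init__(message)
        self.example = example

def explain(err):
    print("error: " + str(err))
    print("here is an example that works instead:")
    print("    " + err.example)

try:
    raise FriendlyError('variable "p" referenced before it was set',
                        "p 5   # name the variable at the start of the line")
except FriendlyError as e:
    explain(e)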

a kernel panic really is bad: https://en.wikipedia.org/wiki/Kernel_panic

for one thing, it means anything you havent saved is lost– though after you turn it off and reboot, it will probably be alright otherwise. an average user might get a kernel panic once or twice a year (if that), unless their electric lines are noisy.

when youre programming, most error messages mean “you typed something in wrong.” thats ok! find it and fix it. it will often tell you where, and figuring it out is part of coding. it also teaches you one of the most valuable applications of that logic they want you to learn: debugging.

of course, once youre not afraid of error messages, youve overcome one of the things that the average user lives in fear of– a message that says “hey, somethings wrong!” and they dont know what it is. and that is a great reason to learn coding!

but educators shouldnt throw away these tools– by all means, add more. few subjects offer such a strong connection between learning concepts and applying them as computing does. dont squander it! use your voice– online or with computer-savvy associates– to talk about what would make the coding experience easier, without throwing the whole opportunity away.

 

 

when programmers get it right (…the first time)

chances are, im going to say a number of things that the author of the blog entry im critiquing will readily agree with. either way, there is plenty you could say about it.

if this sounds like a defense of programmers, or bugs, its far from it. but lets start with this tagline from the blog itself:

Technology should work for you. No excuses!

the core of my complaint is part of the reason i cant stand most peoples “solutions” in the first place– and there are millions of people living in similar circumstances. we (such people) have sufficient understanding of the underlying “thing” that– while most of us are far from the scientists a casual observer may have us pegged for– we can moan about the shortcomings of our tools and sometimes even fix them!

therefore, this is not just a critique for its own sake. but lets get to a quote from yesterdays entry itself:

Technology makes our lives better to the extent we don’t have to think about it.

well, sure. i mean if i had to write a pdf viewer (or drive to the library) every time i wanted to read a book, i would probably read fewer pdfs, or books. its easy to dismiss this as laziness, but the bottom line is: putting such things into our periphery lets us pay attention to other, hopefully more important things. thats productivity in a nutshell, and a lot of things designed for that purpose have the opposite effect. they turn us into this poor guy, focused on the direct drudgery of the thing itself:

(image: still from metropolis, 1927)

by the way, apple (i mean xerox) pioneered that, and its called a direct manipulation interface: https://en.wikipedia.org/wiki/Direct_manipulation_interface (sorry, a little mild cynicism there.)

the blog author, “cxwinnectaland” is actually talking about the example of a pdf viewer they use on a regular basis:

“For the last two years, I have been using Google Chrome to view PDFs at work.  Dozens a day, usually with at least five open at a time.  For the last two years, every time I have rotated the view clockwise or counterclockwise, my page has shifted off-screen and I have had to scroll to get back to it.”

whats funny is that this blog starts out talking about how technology is best when we dont have to think about it– i would counter-argue that technology is best when it lets us put the thought of it aside most of the time– thats a little bit different. because when you choose a tool, you might choose it based on first impressions…

“oh, this is easy to use and fairly good looking…” (a reasonable impression)

“oh, suddenly a common task is easier! im being productive!” (building a preference)

later on:

“hey, its being stupid! im just going to square-peg this into my workflow!”

the point is, this blogger is talking about how great it is that they finally fixed the annoying little “doo-dad”– the anti-feature that distracted the user with a tedious kludge/workaround just to make an everyday viewer work properly. i sort of agree– fixing those things can make a world of difference.

but how about all the steps that made the user more likely to be helpless:

  • so many pdfs, when text and html exist and are better for viewing… with everyone using the wrong tool for the job, the user is expected to also.
  • a pdf viewer that isnt even a pdf viewer, meaning that fixing a small bug is a low priority (and introducing one is easy for the same reason: its not even a vital feature.)
  • countless layers of abstraction/absurdity: graphics layers implemented in the web browser, to show a vectorized implementation of simulated paper designed for printing, instead of an interface designed for simple text and graphics– which is actually what the web browser was before they bolted on the pdf viewer!

if someone just made a good pdf viewer, then it would be a top priority when it had a display bug. instead, we are bolting everything onto the browser like a swiss-army knife.

i appreciate that in computing, sometimes the swiss-army knife model isnt like its namesake: all the little parts arent inferior to the “full-size” stand-alone versions. i mean the way we make “stand-alone” devices is to take a general-purpose machine and hobble it until it can only do one thing. im certainly not in favor of that.

and lets not forget that the whole reason we know about the swiss-army knife is that sometimes its handy when a browser (or gadget) can do everything. im using it to write this instead of a word processor; dont think im unaware that this ancient laptop would perform a lot more nicely with libreoffice than this cloud-like thingy that wordpress has me typing in.

in some cases, i would use something else.

but thats my whole point here!

not thinking about technology actually is what most people do– and it is the true source of so much cost: using a crappy tool (or a good tool with crappy side effects) when there are countless other tools out there that work better, or are designed to do a job well. if the user accepts no responsibility for tool selection (and even has a choice, but simply doesnt bother), then not thinking about technology is the problem, and thinking about it is the solution.

while it is good for developers to fix these things, it couldve been fixed sooner by the user, or by a society that didnt decide that making a frozen, simulated printout into something dynamic was better than simply making a document that is itself dynamic– even in an application already designed around the latter idea.

we do so many bizarre, overcomplicated things because we dont think about how much nonsense we pile atop nonsense; its a complete wonder that devs can still fix anything.

but thats the magic of computing. if instead of reading text from a screen, youd rather read text from a window running in a compositor over a gui that shows a smooth scrolling portion of a document that is simulated using fourier transform algorithms to create an approximation of an actual printed document with margins and custom fonts that you may not like, so it gets loaded into a pdf library that reformats the page after extracting the text from the digital file that is more than a few orders of magnitude larger than a simple text file would be with a one line note as to what font should be used–

you can do all that, and you can put the developers through that level of absurdity. and to an extent, its really their fault for catering to such a sprawling design. i mean its fun to go overboard sometimes– we know, because we have fun using tools that way.

the thing is though, if you ever step back from the details that pop out of everyday use– “no excuses?” how about “youre doing it to yourself”? thats not an excuse, its a fact of life sometimes.

many years ago, socrates (as quoted by plato) said: “the unexamined life is not worth living.” somewhat more recently, arthur c. clarke said: “any sufficiently advanced technology is indistinguishable from magic.”

i will mash these together to say: “the unexamined workflow will become indistinguishable from a horrifying pile of workarounds.” ok, so its not catchy.

real life is messy: but if you find yourself pooping where you drink, and you already have working sinks and toilets but youve simply decided to “change your workflow” in how you utilize them, maybe its time to stand back and think about your technology.

a lot of your gripes come from too many workarounds, and not enough thought about where the real problems are coming from.

this blog entry is part of a philosophical series where “computer types” talk about the world and everything wrong with humanity– i mean, users– i mean, computing. a favorite author of such pieces is andy mender, who wrote the most recent example (also this, this and this) that i can think of. cheers, andy. and devwrench: sorry for jumping all over it! its either a difference in personal philosophy, or i simply read it a lot differently than it was intended. hopefully a good point or two came out of the whole thing.

and to the user: i defend, even champion your right and room to do what works for you– but the odds of that happening are better if you consider the results, and even some of the actual causes, now and again. youre not more likely to find the best tool for you if you always trust people selling one-size-fits-all.

 

  • this work is in the public domain.
  • …that probably even includes the picture.
  • …got it from here and edited it myself.

 

coding: math operators with strings

in basic, the way to concatenate strings could have gone like this:

p$ = concat$(a$, b$)

however, someone thought it was better to overload the addition + operator:

p$ = a$ + b$

this also works in python:

p = a + b

python took this a step further… arrays can also be concatenated:

p = a + b
# ['a', 'b'] + ['c', 'd'] = ['a', 'b', 'c', 'd']

and multiplied:

p = a * 2
# ['a', 'b'] * 2 = ['a', 'b', 'a', 'b']

as can strings:

p = a * 2
# "allo" * 2 = "alloallo"

but thats about it.
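
(all of these overloads are real python, by the way– a quick, runnable check:)

print("hello" + "5")              # hello5 (string concatenation)
print(['a', 'b'] + ['c', 'd'])    # ['a', 'b', 'c', 'd']
print(['a', 'b'] * 2)             # ['a', 'b', 'a', 'b']
print("allo" * 2)                 # alloallo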

there is a split function in python that can turn a string into an array:

"suppose you have this string"

and you want to split it by a space: " "

the syntax for that is:
"suppose you have this string".split(" ")

this gives you the array:

['suppose', 'you', 'have', 'this', 'string']

why doesnt it make sense to use divide for that? it probably does:

"suppose you have this string" / " " =
['suppose', 'you', 'have', 'this', 'string']

what happens if you split “hello” with “e”?

"hello" / "e" = ['h', 'llo']

this is still what the split command in python does. we are just using division as shorthand, in the tradition of + for concatenation and * for string multiplication.

python lets you multiply an array by a number, but not an array by a string. but if we use division as shorthand for split (string -> array), what about its complement, join (array -> string)?

['h', 'llo'] * "e" = "hello"

nonsense, right? well, is this easier?

"e".join(['h', 'llo'])

thats a real python expression. if we can agree that division is an appropriate shorthand for split, then multiplication is an appropriate shorthand for join.
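
python doesnt actually do any of this, but its easy to sketch the proposed shorthand with a couple of wrapper classes. OpStr and OpList are hypothetical names (and this is python 3):

class OpStr(str):
    def __truediv__(self, sep):
        # "a b c" / " " -> split, as proposed above
        return self.split(sep)

class OpList(list):
    def __mul__(self, other):
        # ['h', 'llo'] * "e" -> join; a number keeps its normal meaning
        if isinstance(other, str):
            return other.join(self)
        return list.__mul__(self, other)

print(OpStr("suppose you have this string") / " ")
# ['suppose', 'you', 'have', 'this', 'string']
print(OpStr("hello") / "e")          # ['h', 'llo']
print(OpList(['h', 'llo']) * "e")    # hello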

in basic, you can add “5” to “hello” though you cant add 5 to “hello.”

its a type issue. in js, typing is weak and you can add numeric 5 to string “hello”:

"hello5"

i think its better to return an error, in case a numeric is not expected.

python is dynamically typed, but not weakly typed. it lets the same variable hold a string or a numeric, but it doesnt let you add a string to a numeric:

TypeError: cannot concatenate 'str' and 'int' objects

so whether this should produce “hello5” depends on the type conventions of the language, imo.

if we are dividing “hello” with “there”, then “there” does not go into “hello”– since it cant divide it, the string should not be divided:

"hello" / "there" = "hello" (this is what split does in python as well, except it wraps the result in an array: ['hello'].)

due to this convention, should a string be able to divide itself? a number divided by itself produces 1– and we can translate “1” as “one character”:

"hello" / "hello" = "h" #### is this useful?

or as “one string”:

"hello" / "hello" = "hello"

5 / 5 = 1, but also: 1 (string) / 1 (string) = 1 (string.)

should we be able to divide a string by “” ? no, python doesnt allow that, and numbers dont either. whats the string equivalent of a divide by zero error? it already exists in python:

ValueError: empty separator

but suppose we got rid of this error. we could make "hello" / "" evaluate to:

  • "hello"
  • "hello" or "h"
  • or ""

personally i would be inclined to say that "hello" / "" returns an error, or is "hello"

what do i base that on?

languages exist to allow the expression of ideas.

so i base it on the idea: “what would the average coder expect it to do?”

…within reason.

if you multiply “pete” times 5, you get: “petepetepetepetepete” right?

i mean thats what happens when you add “pete” to “pete” 4 times:

"pete"
"petepete"
"petepetepete"
"petepetepetepete"
"petepetepetepetepete"

so if you divide “petepetepetepetepete” by 5, what should you get?

“pete” yes, thats a logical answer.

but python can step through string bytes in a “for x in string” loop the way basic steps through numbers in a “for w = x to y step z” loop:

for x in "hello": print x #### this is a lot like dividing a string.

for whatever reason, i think this would be useful:

"petepetepetepetepete" / 5 = ['pete', 'pete', 'pete', 'pete', 'pete']

and if you do “mid 1, 1” (in fig) on the resulting array:

"petepetepetepetepete" / 5 : mid 1 1

you get:

"pete"

its a thought. perhaps you like “petepetepetepetepete” / 5 = “pete” better. usefulness and expectations may go together here, or work against each other.

can you divide “hello there” by 20?

no. whether you go with:

"petepetepetepetepete" / 5 = ['pete', 'pete', 'pete', 'pete', 'pete']
or:
"petepetepetepetepete" / 5 = ['pete']

dividing “hello there” by 20 gives you bytes that are less than 1.0 in size. you could try bits, but lets not.

"hello there" / 20 =
cannot divide "hello there" by 20.
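
continuing the sketch (the same hypothetical OpStr idea as before): dividing by a number chunks the string, and refuses when the pieces wont come out whole:

class OpStr(str):
    def __truediv__(self, n):
        # "petepetepetepetepete" / 5 -> five equal chunks
        size, remainder = divmod(len(self), n)
        if size < 1 or remainder:
            raise ValueError("cannot divide %r by %d" % (str(self), n))
        return [str(self[i:i + size]) for i in range(0, len(self), size)]

print(OpStr("petepetepetepetepete") / 5)
# ['pete', 'pete', 'pete', 'pete', 'pete']
OpStr("hello there") / 20
# ValueError: cannot divide 'hello there' by 20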

"hello " + "there" = "hello there" in basic.

so what does "hello there" - "there" = ?

"hello "

what does "hello there" - "e" = ?

in my opinion, it makes the most sense for it to result in "hllo thr"– though perhaps you think it should only remove a single e. which one? up to you i guess; i went with all of them.
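
subtraction as “remove every occurrence” is a one-liner in the same hypothetical sketch:

class OpStr(str):
    def __sub__(self, other):
        # "hello there" - "e" -> "hllo thr": every occurrence removed
        return self.replace(other, "")

print(OpStr("hello there") - "there")    # "hello " (trailing space kept)
print(OpStr("hello there") - "e")        # "hllo thr"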

so there you have it, hopefully practical shorthand inspired by basics + for string concatenation.

some of these could be useful if implemented. leave a comment if theres one you like or do not like.

 

 

 

abstraction vs. obstruction: my take

im currently enjoying a conversation between andy mender and myself on the topic of abstractions. based on this, he wrote an excellent post you can read -> here. i will include the first paragraph and encourage you to read the rest:

“A WordPress user, codeinfig, brought my attention to what I was trying to express in my previous blog entry, but somehow failed to name. The patched together arguments I’ve been tossing around recently illustrate a major issue in software design – abstraction vs obstruction. We build abstraction layers to tie together smaller system components (or features) or to provide an easy to use interface for ourselves and others. However, abstraction layers tend to stack ad infinitum, because we all like having our own, right? Unfortunately, handling abstraction layers becomes more and more difficult with each level. Thus, obstruction is born.”

you can find the rest here: https://linuxmender.wordpress.com/2016/05/03/getting-the-job-done-dilemmas/ …as i posted in my comment:

“steve litt hints at motivation being a key factor in whether an abstraction becomes an obstruction. the desire to become a ‘gatekeeper’ is not something i rule out– although like you, i find that there is still quite a lot left when you focus on the more neutral matter of ‘design philosophy.’

in other words, too many abstractions gets in the way regardless of motivations. (and i believe you’re right about transparency and documentation, too.)

in short, i think gatekeeper aspirations will certainly produce unnecessary and opaque abstractions like litt says, but so will shoddy design with benign intentions.

and the user can only withstand so many of these ‘helpers’ stacked like turtles all the way down, before john/joan q. public starts to blame the computer, the developers, the companies that may or may not be to blame.

but cutting through excessive abstractions is the KEY to computer literacy and efficiency– and still so many focus (exclusively) on applications. in my opinion, that is a sure recipe for a helpless (at best, a frustrated) user.”

i would add that abstraction is what programming “is all about”– until it is overdone, at which point it goes from being a solution to being a new problem.

 

(my own words following the phrase “as i posted in my comment” are in the public domain, and you are welcome to use or re-use them for any purpose.)