I’m on a boat!
I don’t have internet access while at sea, so I’m posting this on Thursday, while we’re in port at La Paz, Mexico.
I’ve still managed to hit my weekly writing goal, though, thanks to my (rapidly becoming trusty) iPad 🙂
Total words: 1,276
No idea what album I’m going to buy as my reward. Last week I ended up grabbing another blast from the past, Orchestral Manoeuvres in the Dark’s Dazzle Ships.
I hear the Black Panther soundtrack’s really good…I might pick that up (once we get home).
Second week of using my new writing score system. Managed to turn out 1,489 words for the new book, so I exceeded my goal (again)!
I rewarded myself last week by buying Bauhaus’ Burning From the Inside. I’d heard of Bauhaus for decades, but never bought one of their albums before, and this article from the AV Club got “She’s in Parties” playing on an infinite loop in my brain. So I took the plunge (and the album’s great, btw).
This week I’m thinking of buying something more recent. Not sure what yet, though.
I’m writing over 300 words most days, so I’m thinking of upping my goal to 6 pages a week, or 1,500 words. I’m about to do a lot of travel over the next few weeks, though — one week on a cruise for vacation, ten days in Northern California for work — so I think it’d be best to wait until after that’s over. 1,250 words a week is going to be hard enough to hit when I’m on the road.
If we designed doorknobs the way we design software, each one would come with a user manual.
They wouldn’t be guaranteed to work. You could spend hundreds of dollars on a new doorknob, only to find the handle doesn’t turn, and the manufacturer doesn’t offer a warranty.
Your doorknob would have options for direction of turn, speed of opening, and click sound, but not shape or color.
Most doorknobs would be sold without locks. You could get a knob with a lock, but it would be $10,000.
Each door in your house would need a different doorknob, depending on what year it was built. Doors from 1994 would need completely different knobs than 1993 doors. Sometimes you’d be able to put a 1995 knob in a 1993 door, but not always.
Modern doorknobs — made only for modern doors — would understand some voice commands, like “What time is it?” and “When did you last close?” But only from one person in the house, and the commands for opening and shutting would be different depending on which knob you bought and which door you installed it in. Most of these voice doorknobs wouldn’t have handles at all.
Some people would lie awake at night, wondering if our doorknobs were getting too smart, and would one day rise up and kill everyone.
After attending Sunday’s Writers Coffeehouse, I decided to adopt Scott Sigler’s suggestion of a scoring system. Thought it’d be a good way to push me to get back in the writing habit, after the fiasco that was the last few months.
I decided on the following:
- A goal of 1,250 words a week. That’s five pages total, or one page a day if I write every weekday.
- Words on the new novel count in full. Words for professional or marketing writing (query letters, etc) count half. So a page of query letter writing equals half a page toward my goal.
- I can’t check the news, or do chores, or pay bills, or anything I usually do in the morning, until after my word count for the day is met.
- If I hit my weekly word count total, I get a reward: buying a new music album. I love getting new music, and albums are cheap enough now that I can buy one once a week and not break the bank.
- If I don’t hit my weekly goal, I get a punishment: no beer or wine for a week. I’m a big craft beer guy, so this hurts: no more pairing a nice IPA with some fish tacos, or a tiramisu with a coffee stout.
One week in, I’m pretty happy with the system. The ban on morning news means I stay focused on my writing when I get up, and can plan out the day’s work.
As a result, I’m writing about 300–400 words a day instead of 250, and I’d already hit 1,554 words by yesterday. If I sustain that pace, I’ll need to up my weekly goal.
So hooray for me! I’ll be getting some new music this week 🙂
Attended my first Writers Coffeehouse in a few months yesterday. I’m glad I did; I came away feeling more like a “real” writer, connected to a community of fellow writers, than I have in a long while.
Plus, our host, Scott Sigler, gave us a system for tracking our progress week by week that I think will help me with my current novel.
Many thanks to Scott Sigler for hosting, and to Mysterious Galaxy for letting us hold it in their (frankly awesome) store!
My notes from the Coffeehouse:
- sports in stories: do enough research that you can color in the character; less detail is more: more detail is more chances to screw it up for people that know it; be specific, but drop it in and move on
- vocal tic, physical mannerism, first name last name: stephen king’s technique; uses for secondary characters as a flag or anchor for readers; establishes it all in one paragraph, then uses throughout
- the scorecard: set a weekly goal, meet it, challenging but doable, set consequences if you don’t make it (scott loses a bass from his collection for two months)
- not sure what to do? write a short story. you’ll accomplish something, and if your brain is distracted by something, that’s what you should work on next
- scott sigler: “how to write your first novel” on youtube: unorthodox writing advice
- his scoring system is based on a page: 250 words.
- when writing first draft, it’s pure words produced
- second draft: each word counts for half, so double the word count goal and achieve that
- third draft: each word only counts one third
- calls with editor, agent, etc: counts for half (ex: 1,000 words an hour means a half hour phone call counts as one page)
- what about research? doesn’t count. research doesn’t pay the bills
- characters, relationships, conflict: all that matters. do just enough research to enable the writing. that’s it
- research trick: find and read a kid’s book on it; they’ve distilled it all for you
- outlines? depends on how much you use them. if you do: single-spaced, count each page of outline as a page, timebox the work (ex: 2 weeks to get the outline done)
- another reason to put off your research: sometimes only when you get to the end do you know what you need to research (backspackling the grenade needed in chapter 30)
- query letters? that’s business, so half-count; set a reasonable goal, like one query letter per week (that’s twelve queries in a quarter, not too shabby)
- and track what you’ve done: on paper, or todo lists, or however, but record your daily work, and total it at the end of the week
- when you make it: celebrate it!
- beta-readers? prefers finding serious readers, not writers. why? TWILIGHT
- best reader is you. take the book, let it sit for six months, come back and read it. you’ll see what you really wrote instead of what you thought you wrote
- reedsy.com: site for finding freelance editors; sigler uses it (but do your research, interview them, etc)
- POV shifts: helps show different aspects of the characters, by giving insights from one pov character about another
- tension: a daily chore that if not done causes trouble (the shining: he has to release the pressure from the boiler every day; lost: they have to go down and push the button every day or else); good way to put a ticking clock in your story
- prisonfall: have the characters in danger from the start, use dealing with that as a way to do your world-building
- muse gone? go write a shitty short story; go write some fan fiction; do something else and come back to it
- recommends putting first book of a series out for free to start out, to get it in the hands of readers, so you can find your audience
- save the cat: great screenplay writing book, with chapters about elevator pitches
- attendee recommends donald maass’ workshop; went last week in irvine, learned a lot
- don’t be afraid to say no when you get a contract from a publisher; hold onto all the merchandising, film, etc rights you can
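Scott’s page math above is simple enough to sketch in code. This is just my own illustration of the weights from the notes (first draft full, second draft half, third draft a third, business work half), not his actual tracker:

```clojure
;; A sketch of the scorecard: convert a week's work into "pages"
;; (250 words each), weighting each kind of work per the notes.
(def weights
  {:first-draft  1     ; pure words produced
   :second-draft 1/2   ; each word counts half
   :third-draft  1/3   ; each word counts a third
   :business     1/2}) ; query letters, calls with editor/agent

(defn pages
  "Total pages earned from entries like [:first-draft 1000]."
  [entries]
  (reduce (fn [total [kind words]]
            (+ total (/ (* (weights kind) words) 250)))
          0
          entries))

;; 1,000 first-draft words plus 500 second-draft words comes out
;; to five pages -- enough to hit a 1,250-word weekly goal.
(pages [[:first-draft 1000] [:second-draft 500]])
```

This also shows why a half-hour editor call at 1,000 words/hour counts as one page: 500 words at half weight is 250 words, i.e. one page.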
Today’s the first day of NaNoWriMo!
I’ve got a rough outline, written short stories about three of my main characters, and filled in most of the setting.
Time to get cracking.
Realized a few weeks back that I wasn’t making the progress on the short stories that I wanted to. And I wasn’t making any progress on editing the second novel.
And NaNoWriMo is coming.
At first, I made the usual excuses to myself — I’ve lost my morning hour to write, I can catch up on the weekends — but I knew the real reason: fear.
Fear that I wasn’t going fast enough. Fear that I wasn’t writing stories that were good enough. Fear that without an hour to write in, I wouldn’t be able to get anything done.
So I’ve gone back to an old habit: write every day. I have a reminder in my phone, a little task that I can only check off when I’ve done some writing that day.
How much doesn’t matter. 100 words, 250 words, 400 words, don’t care. So long as I write something.
And it’s working. I finished the first draft of one short story early this week, and I’ll have a draft of a second story finished this weekend. When those two are done, I can start planning the NaNoWriMo novel.
So I keep telling myself: Step by step, day by day. One word at a time.
A cracking good read. Illuminates the relationship between Gen X and Gen Y, something that’s always felt a little slippery to me (as someone born in 1979, often thrown in with the Millennials but identifying with Gen X).
Filled with moments that made me nod along (the movie list for Gen X), and others that showed me a corner of the 90s I didn’t know existed (Sassy magazine). The book was clearly a labor of love for both Eve and Leonora, and it shows.
Three things I learned:
- Titanic was a huge movie for Gen Y. What I remember as just solid Oscar-bait was apparently perfectly tuned to imprint on young Gen Y brains.
- Clueless can be read as not just a great adaptation of Emma, but also as a love story between Gen Y (Cher) and Gen X (Josh), reflecting the complicated relationship between the two generations.
- Complaining about the current tech-driven dating scene is common to Gen Y, though none of them would want to go back to the way things were before.
It was a good Conj. Saw several friends and former co-workers of mine, heard some great talks about machine learning, and got some good ideas to take back to my current gig.
There were some dud talks, too, and others that promised one (awesome) thing and delivered another (boring) thing, but overall it’s inspiring to see how far Clojure has come in just ten years.
My notes from the conference:
KEYNOTE FROM Rich Hickey: Effective Programs, or: 10 years of Clojure
- clojure released 10 years ago
- never thought more than 100 people would use it
- clojure is opinionated
- few idioms, strongly supported
- born out of the pain of experience
- had been programming for 18 years when wrote clojure, mostly in c++, java, and common lisp
- almost every project used a database
- was struck by two language designers that talked disparagingly of databases, said they’d never used them
- situated programs
- run for long time
- have memory that usually resides in db
- have to handle the weirdness of reality (ex: “two-fer tuesday” for radio station scheduling)
- interact with other systems and humans
- leverage code written by others (libraries)
- effective: producing the intended result
- prefers above the word “correctness”, none of his programs ever cared about correctness
- but: what is computing about?
- making computers effective in the world
- computers are effective in the same way people are:
- generate predictions from experience
- enable good decisions
- experience => information => facts
- programming is NOT about itself, or just algorithms
- programs are dominated by information processing
- but that’s not all: when we start talking to the database or other libraries, we need different protocols to talk to them
- but there’s more! everything continues to mutate over time (db changes, requirements change, libraries change, etc)
- we aspire to write general purpose languages, but will get very different results depending on your target (phone switches, device drivers, etc)
- clojure written for information-driven situated programs
- clojure design objectives
- create programs out of simpler stuff
- want a low cognitive load for the language
- a lisp you can use instead of java/c# (his common lisp programs were never allowed to run in production)
- says classes and algebraic types are terrible for the information programming problem, claims there are no first-class names, and nothing is composable
- in contrast to java’s multitude of classes and haskell’s multitude of types, clojure says “just use maps”
- says pattern matching doesn’t scale, flowing type information between programs is a major source of coupling and brittleness
- positional semantics (arg-lists) don’t scale, eventually you get a function with 17 args, and no one wants to use it
- sees passing maps as args as a way to attach names to things, thinks it’s superior to positional args or typed args
- “types are an anti-pattern for program maintenance”
- using maps means you can deal with them on a need-to-know basis
- things he left out deliberately:
- parochialism: data types
- “rdf got it right”, allows merging data from different sources, without regard for how the schemas differ
- “more elaborate your type system, more parochial the types”
- in clojure, namespace-qualified keys allow data merging without worrying about colliding schemas (should use the reverse-domain scheme, same as java, but not all clojure libraries do)
- another point: when data goes out over the wire, it’s simple: strings, vectors, maps. clojure aims to have you program the same inside as outside
- smalltalk and common lisp: both languages that were designed by people for working programmers, and it shows
- surprisingly, the jvm has a similar sensibility (though java itself doesn’t)
- also wanted to nail concurrency
- functional gets you 90% of the way there
- pulled out the good parts of lisp
- fixed the bad parts: not everything is a list, packages are bad, cons cell is mutable, lists were kind of functional, but not really
- edn data model: values only, the heart of clojure, compatible with other languages, too
- static types: basically disagrees with everything from the Joy of Types talk
- spec: clojure is dynamic for good reasons, it’s not arbitrary, but if you want checking, it should be your choice, both to use it at all and where to use it
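The namespace-qualified-keys point is easy to see in a couple of lines. A toy example (the namespaces here are made up for illustration):

```clojure
;; Two sources both have an "id" field, but namespace-qualified
;; keywords let the maps merge without the schemas colliding.
(def billing-user {:com.billing/id   7
                   :com.billing/plan "pro"})
(def support-user {:com.support/id   "usr-7"
                   :com.support/tier 2})

;; Both id fields survive side by side in the merged map:
(merge billing-user support-user)
;; {:com.billing/id 7, :com.billing/plan "pro",
;;  :com.support/id "usr-7", :com.support/tier 2}
```

With unqualified `:id` keys, one source’s value would have silently clobbered the other’s — which is the “rdf got it right” point as I understood it.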
Learning Clojure and Clojurescript by Playing a Game
- inspired by the gin rummy card game example in dr scheme for the scheme programming language
- found the java.awt.Robot class, which can take screenshots and move the mouse, click things
- decided to combine the two, build a robot that could play gin rummy
- robot takes a screenshot, finds the cards, their edges, and which ones they are, then plays the game
- lessons learned:
- when clojurescript came out, decided to rebuild it, but in the browser
- robot still functions independently, but now takes screenshot of the browser-based game
- built a third version with datomic as a db to store state, allowing two clients to play against each other
- absolutely loves the “time travel” aspects of datomic
- also loves pedestal
Bayesian Data Analysis in Clojure
- using clojure for about two years
- developed toolkit for doing bayesian statistics in clojure
- why clojure?
- not as many existing tools as julia or R
- but: easier to develop new libraries than in julia (and most certainly R)
- other stats languages like matlab and R don’t require much programming knowledge to get started, but harder to dev new tools in them
- open-source clojure lib for generating and working with probability distributions in clojure
- can also provide data and prior to get posterior distribution
- and do posterior-predictive distributions
- wrote a way to generate random functions over a set of points (for trying to match noisy non-linear data)
- was easy in clojure, because of lazy evaluation (can treat the function as defined over an infinite vector, and only pull out the values we need, without blowing up)
- …insert lots of math that i couldn’t follow…
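The lazy-evaluation trick — treating a function as defined over an infinite sequence of points and only realizing what you need — looks something like this (my own toy version; the talk’s actual toolkit did real random functions, this uses a deterministic stand-in for noise):

```clojure
;; A "function over an infinite vector": lazily pair every
;; non-negative integer x with f(x) plus some noise. Nothing is
;; computed until a value is demanded.
(defn noisy-curve
  [f noise-fn]
  (map (fn [x] (+ (f x) (noise-fn x))) (range)))

;; Only the first five points are ever realized, so the infinite
;; sequence never blows up:
(take 5 (noisy-curve (fn [x] (* x x)) (fn [x] (mod x 3))))
;; => (0 2 6 9 17)
```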
Building Machine Learning Models with Clojure and Cortex
- came from a python background for machine learning
- thinks there’s a good intersection between functional programming and machine learning
- will show how to build a baby classification model in clojure
- expert systems: dominant approach to AI through the 2010s
- limitations: sometimes we don’t know the rules, and sometimes we don’t know how to teach a computer the rules (even if we can articulate them)
- can think of the goal of machine learning as being to learn the function F that, when applied to a set of inputs, will produce the correct outputs (labels) for the data
- power of neural nets: assume that they can make accurate approximations for a function of any dimensionality (using simpler pieces)
- goal of neural nets is to learn the right coefficients or weights for each of the factors that affect the final label
- deep learning: a “fat” neural net…basically a series of layers of perceptrons between the input and output
- why clojure? we already have a lot of good tools in other languages for doing machine learning: tensorflow, caffe, theano, torch, deeplearning4j
- functional composition: maps well to neural nets and perceptrons
- also maps well to basically any ML pipeline: data loading, feature extraction, data shuffling, sampling, recursive feedback loop for building the model, etc
- clojure really strong for data processing, which is a large part of each step of the ML pipeline
- ex: lazy sequences really help when processing batches of data multiple times
- can also do everything we need with just the built-in data structures
- cortex: meant to be the theano of clojure
- basically: import everything from it, let it do the heavy lifting
- backend: compute lib executes on both cpu and gpu
- implements as much of neural nets as possible in pure clojure
- meant to be highly transparent and highly customizable
- cortex represents neural nets as DAG, just like tensorflow
- nodes, edges, buffers, streams
- basically, a map of maps
- can go in at any time and see exactly what all the parameters are, for everything
- basic steps to building model:
- load and process data (most time consuming step until you get to tuning the model)
- define minimal description of the model
- build full network from that description and train it on the model
- for example: chose a credit card fraud dataset
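The “functional composition maps well to perceptrons” point can be shown without any library at all. A minimal sketch (my own, not Cortex’s API): a perceptron is just a function of its inputs, so layers compose like any other functions:

```clojure
;; A single perceptron: weighted sum plus bias, pushed through an
;; activation function. The result is itself just a function.
(defn perceptron [weights bias activation]
  (fn [inputs]
    (activation (+ bias (reduce + (map * weights inputs))))))

(def step (fn [x] (if (pos? x) 1 0)))

;; Hand-picked weights that implement an AND gate: it only fires
;; when both inputs are 1 (sum 2 > threshold 1.5).
(def and-gate (perceptron [1 1] -1.5 step))

(map and-gate [[0 0] [0 1] [1 0] [1 1]])
;; => (0 0 0 1)
```

Learning is then “just” searching for the weights and bias instead of hand-picking them — which is the coefficient-learning goal from the notes above.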
Clojure: Scaling the Event Stream
- director, programmer of his own company
- recommends ccm-clj for cassandra-clojure interaction
- expertise: high-availability streaming systems (for smallish clients)
- systems he builds deal with “inconvenient” sized data for non-apple-sized clients
- has own company: troy west, founded three years ago
- one client: processes billions of emails, logs 10–100 events per email, multiple systems log in different formats, 5K–50K event/s
- 10–100 TB of data
- originally, everything logged on disk for analysis after the fact
- requirements: convert events into meaning, support ad-hoc querying, generate reports, do real-time analysis and alerting, and do it all without falling over at scale or losing uptime
- early observations:
- each stream is a seq of immutable facts
- want to index the stream
- want to keep the original events
- want idempotent writes
- just transforming data
- originally reached for java, since that’s the language he’s used to using
- in-flight: kafka
- compute over the data: storm (very powerful, might move in the direction of onyx later on)
- at-rest: cassandra (drives more business to his company than anything else)
- kafka: partitioning really powerful tool for converting one large problem into many smaller problems
- storm: makes it easy to spin up more workers to process individual pieces of your computation
- cassandra: their source of truth
- query planner, query optimizer: services written in clojure, instead of throwing elasticsearch at the problem
- recommends: Designing Data-Intensive Applications, by Martin Kleppmann
- thinks these applications are clojure’s killer app
- core.async gave them fine-grained control of parallelism
- recommends using pipeline-async as add-on tool
- composable channels are really powerful: you can set off several parallel ops at once and, as they return, have another process combine their results and produce another channel
- but: go easy on the hot sauce, gets very tempting to put it everywhere
- instaparse lib critical to handling verification of email addresses
- REPL DEMO
- some numbers: 0 times static types would have saved the day, 27 repos, 2 team members
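The partitioning idea — one large problem into many smaller ones — is worth a tiny sketch. This is only the concept in miniature (real Kafka partitioning happens in the producer/broker, and the event shape here is invented):

```clojure
;; Hash each event's key to a partition, so events with the same
;; key always land in the same partition and keep their ordering.
(defn partition-for [n-partitions k]
  (mod (hash k) n-partitions))

(defn partition-stream [n-partitions events]
  (group-by (fn [{:keys [email-id]}]
              (partition-for n-partitions email-id))
            events))

(def events [{:email-id "a" :type :sent}
             {:email-id "a" :type :open}
             {:email-id "b" :type :sent}])

;; Each partition can now be processed independently; per-email
;; ordering is preserved because "a" events share a partition.
(partition-stream 4 events)
```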
The Power of Lacinia and Hystrix in Production
- few questions:
- anyone tried to combine lacinia and hystrix?
- anyone played with lacinia?
- anyone used graphql?
- anyone used hystrix?
- hystrix : circuit-breaker implementation
- lacinia: walmart-labs’ graphql
- why both?
- simple example: ecommerce site, aldo shoes, came to his company wanting to renovate the whole website
- likes starting his implementations by designing the model/schema
- in this case, products have categories, and categories have parent/child categories, etc
- uses graphviz to write up his model designs
- initial diagram renders it all into a clojure map
- they have a tool called umlaut that they used to write a schema in a single language, then generate via instaparse representations in graphql, or clojure schema, etc
- lacinia resolver: takes a graphql query and returns json result
- lacinia ships with a react application called GraphiQL that lets you explore your db through the browser (via live queries, etc)
- gives a lot of power to the front-end when you do this, lets them change their queries on the fly, without having to redo anything on the backend
- problem: the images are huge, 3200×3200 px
- need something smaller to send to users
- add a new param to the schema: image-obj, holds width and height of the image
- leave the old image attribute in place, so don’t break old queries
- can then write new queries on the front-end for the new attribute, fetch only the size of image that you want
- one thing he’s learned from marathon running (and stolen from the navy seals): “embrace the suck.” translation: the situation is bad. deal with it.
- his suck: ran into problem where front-end engineers were sending queries that timed out against the back-end
- root cause: front-end queries hitting backend that used 3rd-party services that took too long and broke
- wrote a tiny latency simulator: added random extra time to round-trip against db
- even with 100ms max, latency diagram showed ~6% of the requests (top-to-bottom) took over 500ms to finish
- now tweak it a bit more: have two dependencies, and one of them has a severe slowdown
- now latency could go up to MINUTES
- initial response: reach for bumping the timeouts
- time for hystrix: introduce a circuit breaker into the system, to protect the system as a whole when an individual piece goes down
- hystrix has an official clojure wrapper (!)
- provides a macro: defcommand, wrap it around functions that will call out to dependencies
- if it detects a long timeout, in the future, it will fail immediately, rather than waiting
- as part of the macro, can also specify a fallback-fn, to be called when the circuit breaker is tripped
- adding that in, the latency diagram is completely different. performance stays fast under much higher load
- fallback strategies:
- fail fast
- fail silently
- send static content
- use cached content
- use stubbed content: infer the proper response, and send it back
- chained fallbacks: a little more advanced, like connecting multiple circuit breakers in a row, in case one fails, the other can take over
- hystrix dashboard: displays info on every defcommand you’ve got, tracks health, etc
- seven takeaways
- MUST embrace change in prod
- MUST embrace failure: things are going to break, you might as well prepare for it
- graphql is just part of the equation; if your resolvers get too complex, you can introduce hystrix and push the complexity into other services
- monitor at the function level (via hystrix dashboard)
- adopt a consumer-driven mindset: the users have the money, don’t leave their money on the table by giving them a bad experience
- force yourself to think about fallbacks
- start thinking about the whole product: production issues LOOK to users like production features
- question: do circuit-breakers introduce latency?
- answer: a little bit at the upper end, once it’s been tripped
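To make the defcommand + fallback-fn idea concrete, here’s a toy circuit breaker in plain Clojure. This is entirely my own sketch of the concept (the real hystrix wrapper’s `defcommand` macro does much more, including the half-open retry state this toy omits):

```clojure
;; After `threshold` consecutive failures, every subsequent call
;; goes straight to the fallback instead of the slow dependency.
;; (No half-open/recovery state in this sketch.)
(defn circuit-breaker
  [f fallback threshold]
  (let [failures (atom 0)]
    (fn [& args]
      (if (>= @failures threshold)
        (apply fallback args)           ; breaker open: fail fast
        (try
          (let [result (apply f args)]
            (reset! failures 0)          ; success closes the breaker
            result)
          (catch Exception _
            (swap! failures inc)
            (apply fallback args)))))))

;; A dependency that always times out, with "use cached content"
;; as the fallback strategy (sku and shape are invented):
(def fetch-price
  (circuit-breaker
   (fn [sku] (throw (ex-info "dependency timed out" {:sku sku})))
   (fn [sku] {:sku sku :price :cached})
   2))

(fetch-price "shoe-42") ; failure 1 -> fallback
(fetch-price "shoe-42") ; failure 2 -> breaker opens
(fetch-price "shoe-42") ; fails fast, dependency never called
```

This is why the latency diagram changes so dramatically: once the breaker trips, requests stop queuing behind the broken dependency.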
The Tensors Must Flow
- works at magento, lives in philly
- really wants to be sure our future robot masters are written in clojure, not python
- guildsman: tensorflow library for clojure
- tensorflow: ML lib from google, recently got a c api so other languages can call into it
- spoiler alert: don’t get TOO excited. everything’s still a bit of a mess
- but it DOES work, promise
- note on architecture: the python client (from google) has access to a “cheater” api that isn’t part of the open c api. thus, there are some things it can do that guildsman can’t, because the api isn’t there
- also: ye gods, there’s a lot of python in the python client. harder to port everything over to guildsman than he thought
- very recently, tensorflow started shipping with a java layer built on top of a c++ lib (via jni), which itself sits on top of the tensorflow c api, some people have started building on top of that
- but not guildsman: it sits directly on the c api
- in guildsman: put together a plan, then build it, and execute it
- functions like guildsman/add produce plan maps, instead of executing things themselves
- simple example: adding two numbers: just one line in guildsman
- another simple example: have machine learn to solve | x – 2.0 | by finding the value of x that minimizes it
- tensorflow gives you the tools to find minima/maxima: gradient descent, etc
- gradient gap: guildsman can use either the clojure gradients, or the c++ ones, but not both at once
- needs help to port the c++ ones over to clojure (please!)
- “python occupies the 9th circle of dependency hell”: been using python lightly for years, and still has problems getting dependencies resolved (took a left turn at the virtual environment, started looking for my oculus rift)
- demo: using mnist dataset, try to learn to recognize handwritten characters
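The |x – 2.0| example can be reproduced in ordinary Clojure, without Guildsman’s plan-map API: the gradient of |x – 2| is just the sign of x – 2, so gradient descent steps toward 2.0. A sketch of the underlying idea only, not the library:

```clojure
;; Minimize f(x) = |x - 2.0| by gradient descent: step against
;; the gradient, which for this f is (signum (- x 2.0)).
(defn descend [x lr steps]
  (if (zero? steps)
    x
    (recur (- x (* lr (Math/signum (- x 2.0))))
           lr
           (dec steps))))

;; From 0.0, with learning rate 0.1, x walks up toward the
;; minimizer and settles near 2.0:
(descend 0.0 0.1 100)
```

In Guildsman (per the talk), you’d instead build a plan map describing the loss and hand it to a gradient-descent optimizer to execute.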
The Dawn of Lisp: How to Write Eval and Apply in Clojure
- educator, started using scheme in 1994, picked up clojure in 2009
- origins of lisp: john mccarthy’s paper: recursive functions of symbolic expressions and their computation by machine, part i
- implementation of the ideas of alonzo church, from his book “the calculi of lambda-conversion”
- “you can always tell the lisp programmers, they have pockets full of punch cards with lots of closing parentheses on them”
- steve russell (one of the creators of spacewar) was the first to actually implement the description from mccarthy’s paper
- 1962: lisp 1.5 programmer’s manual, included section on how to define lisp in terms of itself (section 1.6: a universal lisp function)
- alan kay described this definition (of lisp in terms of lisp) as the maxwell equations of software
- how eval and apply work in clojure:
- eval: send it a quoted list (data structure, which is also lisp), eval will produce the result from evaluating that list
- apply: takes a function and a quoted list, applies that function to the list, then returns the result
- ex: (apply + ‘(2 2)) => 4
- rules for converting the lisp 1.5 spec to clojure
- convert all m-expression to s-expressions
- keep the definitions as close to original as possible
- drop the use of dotted pairs
- give all global identifiers a ‘$’ prefix (not really the way clojure says it should be used, but helps the conversion)
- add whitespace for legibility
- m-expressions vs s-expressions:
- F[1;2;3] becomes (F 1 2 3)
- [X < 0 -> -X; T -> X] becomes (COND ((< X 0) (- X)) (T X))
- dotted pairs
- basically (CONS (QUOTE A) (QUOTE B))
- definitions: $T -> true, $F -> false, $NIL, $cons, $atom, $eq, $null, $car, $cdr, $caar, $cdar, $caddr, $cadar
- note: anything that cannot be divided is an atom, no relation to clojure atoms
- last few: various combos of car and cdr for convenience
- elaborate definitions:
- $cond: own version of cond to keep syntax close to the original
- $pairlis: accepts three args: two lists, and a list of existing pairs, combines the first two lists pairwise, and combines with the existing paired list
- $assoc: lets you pull key-value pair out of an association list (list of paired lists)
- $evcon: takes list of paired conditions and expressions, plus a context, will return the result of the expression for the first condition that evaluates to true
- $evlist: takes list of expressions, with a condition, and a context, and then evaluates the result of the condition + the expression in a single list
- live code demo
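In the spirit of the talk, here’s a compressed eval/apply in Clojure data structures. This is my own miniature, not the speaker’s $-prefixed code: it skips dotted pairs and most of the Lisp 1.5 machinery, but shows the mutual recursion at the heart of section 1.6:

```clojure
;; A tiny meta-circular evaluator: $eval evaluates expressions in
;; an environment (a plain map), $apply applies functions to
;; already-evaluated arguments.
(declare $eval)

(defn $apply [f args env]
  (cond
    (fn? f)               (apply f args)   ; primitive (host) function
    (= 'LAMBDA (first f)) (let [[_ params body] f]
                            ;; bind params to args, eval the body
                            ($eval body (merge env (zipmap params args))))
    :else                 (throw (ex-info "unknown function" {:f f}))))

(defn $eval [e env]
  (cond
    (symbol? e)          (env e)            ; variable lookup
    (not (seq? e))       e                  ; self-evaluating atom
    (= 'QUOTE (first e)) (second e)
    (= 'COND (first e))  (some (fn [[test expr]]
                                 (when ($eval test env) ($eval expr env)))
                               (rest e))
    :else (let [op (first e)
                f  (if (and (seq? op) (= 'LAMBDA (first op)))
                     op                     ; literal lambda form
                     ($eval op env))]
            ($apply f (map #($eval % env) (rest e)) env))))

;; ((LAMBDA (x y) (+ x y)) 2 2) => 4
($eval '((LAMBDA (x y) (+ x y)) 2 2) {'+ +})
```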
INVITED TALK FROM GUY STEELE: It’s time for a New Old Language
- “the most popular programming language in computer science”
- no compiler, but lots of cross-translations
- would say the name of the language, but doesn’t seem to have one
- so: CSM (computer science metanotation)
- has built-in datatypes, expressions, etc
- it’s beautiful, but it’s getting messed up!
- walk-throughs of examples, how to read it (drawn from recent ACM papers)
- “isn’t it odd that language theorists wanting to talk about types, do it in an untyped language?”
- wrote toy compiler to turn latex expressions of CSM from emacs buffer into prolog code, proved it can run (to do type checking)
- inference rules: Gentzen Notation (notation for “natural deduction”)
- BNF: can trace it all the way back to 4th century BCE, with Panini’s sanskrit grammar
- regexes: took thirty years to settle on a notation (51–81), but basically hasn’t changed since 1981!
- final form of BNF: not set down till 1996, though based on a paper from 1977
- but even then, variants persist and continue to be used (especially in books)
- variants haven’t been a problem, because the common pieces are easy enough to identify and read
- modern BNF in current papers is very similar to classic BNF, but with 2 changes to make it more concise:
- use single letters instead of meaningful phrases
- use bar to indicate repetition instead of … or *
- substitution notation: started with Church, has evolved and diversified over time
- current favorite: e[v/x] to represent “replace x with v in e”
- number in live use has continued to increase over time, instead of variants rising and falling (!)
- bigger problem: some substitution variants are also being used to mean function/map update, which is a completely different thing
- theory: these changes are being driven by the page limits for computer science journals (all papers must fit within 10 pages)
- overline notation (dots and parentheses, used for grouping): can go back to 1484, when chuquet used underline to indicate grouping
- 1702: leibniz switched from overlines to parentheses for grouping, to help the typesetters publishing his books
- three notations duking it out for 300 years!
- vectors: notation -> goes back to 1813, and jean-robert argand (for graphing complex numbers)
- nested overline notation leads to confusion: how do we know how to expand the expressions that are nested?
- one solution: use an escape from the defaults, when needed, like backtick and tilde notation in clojure
- CSM is a completely valid language
- should be a subject of study
- has issues, but those can be fixed
- would like to see a formal theory of the language, along with tooling for developing in it, checking it, etc
- thinks there are opportunities for expressing parallelism in it
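The substitution notation e[v/x] (“replace x with v in e”) is itself easy to express as a function over expression trees. A sketch only — it ignores variable capture and binding forms, which the real notation has to worry about:

```clojure
;; Naive substitution over s-expressions: replace every occurrence
;; of symbol x with value v in expression e.
(defn subst [e v x]
  (cond
    (= e x)  v                         ; found the variable
    (seq? e) (map #(subst % v x) e)    ; recurse into subexpressions
    :else    e))                       ; any other atom unchanged

(subst '(+ x (* x y)) 7 'x)
;; => (+ 7 (* 7 y))
```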
Declarative Deep Learning in Clojure
- starts with explanation of human cognition and memory
- at-your-desk memory vs in-the-hammock memory
- limitation of neural networks: once trained for a task, it can’t be retrained to another without losing the first
- if you train a NN to recognize cats in photos, you can’t then ask it to analyze a time series
- ART architecture: uses two layers, F1 and F2, the first to handle data that has been seen before, the second to “learn” on data that hasn’t been encountered before
- LSTM-cell processing:
- what should we forget?
- what’s new that we care about?
- what part of our updated state should we pass on?
- dealing with the builder pattern in java: more declarative than sending a set of ordered args to a constructor
- his lib allows keyword args to be passed in to the builder function, don’t have to worry about ordering or anything
- by default, all functions produce a data structure that evaluates to a dl4j object
- live demos (but using pre-trained models, no live training, just evaluation)
- what’s left?
- front end
- kafka support
- reinforcement learning
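The keyword-args-instead-of-a-builder point translates to a few lines of Clojure. This is illustrative only (invented names, not the actual dl4j wrapper’s API):

```clojure
;; Instead of a Java builder with ordered setter calls, take
;; keyword args in any order and fill in defaults with :or.
(defn layer
  [& {:keys [units activation dropout]
      :or   {activation :relu dropout 0.0}}]
  {:units units :activation activation :dropout dropout})

;; Args can come in any order, and omitted ones get defaults:
(layer :units 128 :dropout 0.5)
;; => {:units 128, :activation :relu, :dropout 0.5}
```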
Learning Clojure Through Logo
- disclaimer: personal views, not views of employers
- logo: language to control a turtle, with a pen that can be up (no lines) or down (draws lines as it moves)
- …technical difficulties, please stand by…
- live demo of clojure/script version in the browser
- turns out logo is a lisp (!): function call is always in first position, give it all args, etc
- even scratch is basically lisp-like
- irony: we’re using lisp to teach kids how to program, but then they go off to work in the world of curly braces and semicolons
- clojure-turtle lib: open-source implementation of the logo commands in clojure
- more live demos
- recommends reading seymour papert’s book: “Mindstorms: Children, Computers, and Powerful Ideas”
- think clojure (with the power of clojurescript) is the best learning language
- have a tutorial that introduces the turtle, logo syntax, moving the turtle, etc
- slowly introduces more and more clojure-like syntax, function definitions, etc
- fairly powerful environment: can add own buttons for repeatable steps, can add animations, etc
- everything’s in the browser, so no tools to download, nothing to mess with
- “explaining too early can hurt”: want to start with as few primitives as possible, make the intro slow
- can create your own lessons in markdown files, can just append the url to the markdown file and it’ll load (!)
- prefer that you send in the lessons to them, so they can put them in the lessons index for everyone to benefit
- have even translated the commands over to multiple languages, so you don’t have to learn english before learning programming (!)
- lib: cban, which has translations of clojure core, can be used to offer translations of your lib code into something other than english
- clojurescript repls: Klipse (replaces all clojure code in your page with an interactive repl)
- comments/suggestions/contributions welcome
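The turtle model is a nice fit for Clojure because the whole thing is just state flowing through functions. A sketch of the idea (my own, not clojure-turtle’s actual API, which also renders the drawing):

```clojure
;; The turtle as pure data: each command is a function from state
;; to state, so programs are just threaded pipelines.
(def start {:x 0.0 :y 0.0 :heading 0.0 :pen :down :lines []})

(defn right [state degrees]
  (update state :heading #(mod (+ % degrees) 360)))

(defn forward [state dist]
  (let [rad  (Math/toRadians (:heading state))
        x'   (+ (:x state) (* dist (Math/cos rad)))
        y'   (+ (:y state) (* dist (Math/sin rad)))
        line [[(:x state) (:y state)] [x' y']]]
    (cond-> (assoc state :x x' :y y')
      ;; only record a line when the pen is down
      (= :down (:pen state)) (update :lines conj line))))

;; One side of a square, then a turn:
(-> start (forward 10) (right 90) :heading)
;; => 90.0
```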
Got multiple rejections this week.
One was from an agent I’d queried about representing my novel. That was the fastest rejection I think I’ve ever gotten. I emailed in the query, and 24 hours later I had a rejection in my inbox.
Second one was for a short story I’ve been shopping around. The editor included feedback on what they liked and what they feel the story needs to improve, though, so I’m taking that as a good sign.
Meanwhile, I’m trying to fight off a cold, edit my third short story from this summer, and start editing my second novel. Oh, and now I need to find a new market to send that newly-rejected short story to.
Sometimes I wish I could take a week off the day job just to catch up on everything. Sometimes I feel like I’d need a month.