Introducing elm-present

I’m in love with Elm. No, really.

I don’t know if it’s just that I’ve been away from front-end development for a few years, but working in Elm has been a breath of fresh air.

When my boss offered to let us give Tech Talks on any subject at the last company meetup, I jumped at the chance to talk about Elm.

And, of course, if I was going to give a talk about Elm, I had to make my slides in Elm, didn’t I?

So I wrote elm-present.

It’s a (very) simple presentation app. Slides are json files that have a title, some text, a background image, and that’s it. Each slide points to the one before it, and the one after, for navigation.
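
For instance, a slide file might look something like this (a hypothetical example; the exact field names in elm-present may differ):

```json
{
  "title": "Why Elm?",
  "text": "No runtime exceptions. Really.",
  "background": "images/sunrise.jpg",
  "previous": "slide-01.json",
  "next": "slide-03.json"
}
```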

elm-present handles reading in the files, parsing the json, and displaying everything (in the right order).

And the best part? You don’t need a server to run it. Just push everything up to Dropbox, open the present.html file in your browser, and voilà!

You can see the talk I gave at the meetup here, as a demo.

Seven More Languages in Seven Weeks: Julia

Julia feels…rough.

There are parts I absolutely love, like the strong typing, the baked-in matrix operations, and the support for multiple dispatch.

Then there are the pieces that seem incomplete. Like the documentation, which is very extensive, but proved useless when I tried to find the proper way to build dictionaries in the latest version. Or the package system, which will install things right into a running repl (cool!) but does it without getting your permission for all its dependencies (boo).

All in all, I’d like to build something more extensive in Julia. Preferably something ML-related, that I might normally build in Python.

Day One

  • can install with brew
  • book written for 0.3, newest version is 0.5
  • has a repl built in 🙂
  • typeof for types
  • "" for strings, '' only for single chars
  • // when you want to divide and leave it divided (no float, keep the fraction)
  • has symbols
  • arrays are typed, but can contain more than one type (will switch from char to any, for example)
  • commas are required for lists, arrays, etc (boo)
  • tuples: fixed sized bags of values, typed according to what they hold
  • arrays carry around their dimensionality (will be important for matrix-type ops later on)
  • has dictionaries as well
  • hmm: typeof({:foo => 5}) -> vector syntax is discontinued
  • Dicts have to be explicitly built now: Dict(:foo => 5) is the equivalent
  • XOR operator with $
  • bits to see the binary of a value
  • can assign to multiple variables at once with commas (like python)
  • trying to access an undefined key in a dict throws an error
  • in can check membership of arrays or iterators, but not dictionaries
  • but: can check for key and value in dict using in + pair of key, value: in(:a => 1, explicit)
  • book’s syntax of using tuple for the search is incorrect
  • julia docs are really…not helpful :/
  • book’s syntax for set construction is also wrong
  • nothing in the online docs to correct it
  • (of course, nothing in the online docs to correct my Dict construction syntax, either)
  • can construct Set with: Set([1, 2, 3])
  • arrays are typed (Any for multiple types)
  • array indexes start at 1, not 0 (!) [follows math here]
  • array slices include the ending index
  • can mutate index by assigning to existing index, but assigning to non-existing index doesn’t append to the array, throws error
  • array notation is row, column
  • * will do matrix multiplication (means # of columns of the first has to match # of rows of the second)
  • regular element-wise multiplication needs .*
  • need a transpose? just add '
  • very much like linear algebra; baked-in
  • dictionaries are typed, will throw error if you try to add key/value to them that doesn’t match the types it was created with
  • BUT: can merge a dict with a dict with different types, creates a new dict with Any to hold the differing types (keys or values)

Day Two

  • if..elseif…end
  • if check has to be a boolean; won’t coerce strings, non-boolean values to booleans (nice)
  • reference vars inside of strings with $ prefix: println("$a")
  • has user-defined types
  • can add type constraints to user-defined type fields
  • automatically gets constructor fn with the same name as the type and arguments, one per field
  • subtype only one level
  • abstract types are just ways to group other types
  • no more super(), use supertype() -> suggested by compiler error message, which is nice
  • functions return value of last expression
  • … to get a collection of args
  • +(1, 2) -> yields 3, operators can be used as prefix functions
  • … will expand collection into arguments for a function
  • will dispatch function calls based on the types of all the arguments
  • typo on pg 208: int() doesn’t exist, it’s Int()
  • WARNING: Base.ASCIIString is deprecated, use String instead.
  • no need to extend protocols or objects, classes, etc to add new functions for dispatching on core types: can just define the new functions, wherever you like, julia will dispatch appropriately
  • avoids problem with clojure defmulti’s, where you have to bring in the parent lib all the time
  • julia has erlang-like processes and message-passing to handle concurrency
  • WARNING: remotecall(id::Integer,f::Function,args…) is deprecated, use remotecall(f,id::Integer,args…) instead.
  • (remotecall arg order has changed)
  • randbool -> NOPE, try rand(Bool)
  • looks like there’s some overhead in using processes for the first time; pflip_coins times are double the non-parallel version at first, then are reliably twice as fast
  • julia founders answered the interview questions as one voice, with no distinction between them
  • whole section in the julia manual for parallel computing

Day Three

  • macros are based off of lisp’s (!)
  • quote with :
  • names fn no longer exists (for the Expr type, just fine for the Module type)
  • use fieldnames instead
  • unquote -> $
  • invoke macro with @ followed by the args
  • Pkg.add() will fetch directly into a running repl
  • hmm…installs homebrew without checking if it’s on your system already, or if you have it somewhere else
  • also doesn’t *ask* if it’s ok to install homebrew
  • not cool, julia, not cool
  • even then, not all dependencies installed at the time…still needed QuartzIO to display an image
  • view not defined
  • ImageView.view -> deprecated
  • imgshow does nothing
  • docs don’t help
  • hmm…restarting repl seems to have fixed it…window is hidden behind others
  • img no longer has data attribute, is just the pixels now
  • rounding errors mean pixels != pixels2
  • ifloor -> floor(Int64, val) now
  • works!

Seven More Languages in Seven Weeks: Elixir

So frustrating. I had high hopes going in that Elixir might be my next server-side language of choice. It’s built on the Erlang VM, after all, so concurrency should be a breeze. Ditto distributed applications and fault-tolerance. All supposedly wrapped in a more digestible syntax than Erlang provides.

Boy, was I misled.

The syntax seems to be heavily Ruby-influenced, in a bad way. There are magic methods, black box behavior, and OOP-style features built in everywhere.

The examples in this chapter go deep into this Ruby-flavored world, and skip entirely over what I thought were the benefits of the language. If Elixir makes writing concurrent, distributed applications easier, I have no idea, because this book doesn’t bother to work through examples that highlight it.

Instead, the impression I get is that this is a way to write Ruby in Erlang, an attempt to push OOP concepts into the functional programming world, resulting in a hideous language that I wouldn’t touch with a ten-foot pole.

I miss Elm.

Day One

  • biggest influences: lisp, erlang, ruby
  • need to install erlang *and* elixir
  • both available via brew
  • syntax changing quickly, it’s a young language
  • if do:
  • IO.puts for println
  • expressions in the repl always have a return value, even if it’s just :ok
  • looks like it has symbols, too (but they’re called atoms)
  • tuples: collections of fixed size
  • can use pattern matching to destructure tuples via assignment operator
  • doesn’t allow mutable state, but can look like it, because compiler will rename vars and shuffle things around for you if you assign something to price (say) multiple times
  • weird: “pipes” |> for threading macros
  • dots and parens only needed for anonymous functions (which can still be assigned to a variable)
  • prints out a warning if you redefine a module, but lets you do it
  • pattern matching for multiple function definitions in a single module (will run the version of the function that matches the inputs)
  • can define one module’s functions in terms of another’s
  • can use when conditions in function def as guards to regulate under what inputs the function will get run
  • scripting via .exs files, can run with iex
  • put_in returns an updated copy of the map, it doesn’t update the map in place
  • elixir’s lists are linked lists, not arrays!
  • char lists are not strings: dear god
  • so: is_list "string" -> false, but is_list 'string' -> true (!)
  • wat
  • pipe to append to the head
  • when destructuring a list, the number of items on each side has to match (unless you use the magic pipe)
  • can use _ for matching arbitrary item
  • Enum for processing lists (running arbitrary functions on them in different ways, like mapping and reducing, filtering, etc)
  • for comprehensions: a lot like python’s list comprehensions; takes a generator (basically ways to pull values from a list), an optional filter (filter which values from the list get used), and a function to run on the pulled values
  • elixir source is on github

Day Two

  • mix is built in to elixir, installing the language installs the build tool (nice)
  • basic project template includes a gitignore, a readme, and test files
  • source files go in lib, not src
  • struct: map with fixed set of fields, that you can add behavior to via functions…sounds like an object to me :/
  • iex -S mix to start iex with modules from your project
  • will throw compiler errors for unknown keys, which is nice, i guess?
  • since built on the erlang vm, but not erlang, we can use macros, which get expanded at compile time (presumably, to erlang code)
  • should is…well…kind of a silly macro
  • __using__ just to avoid a fully-qualified call seems…gross…and too implicit
  • and we’ve got to define new macros to override compile-time behavior? i…i can’t watch
  • module attributes -> compile-time variables -> object attributes by another name?
  • use, __using__, @before_compile -> magic, magic everywhere, so gross
  • state machine’s “beautiful syntax” seems more like obscure indirection to me
  • can elixir make me hate macros?
  • whole thing seems like…a bad example. as if the person writing it is trying to duplicate OOP-style inheritance inside a functional language.
  • elixir-pipes example from the endnotes (github project) is much better at showing the motivation and usage of real macros

Day Three

  • creator’s main language was Ruby…and it shows :/
  • spawn returns the process id of the underlying erlang process
  • pattern matching applies to what to do with the messages a process receives via its inbox
  • can write the code handling the inbox messages *after* the messages are sent (!)
  • task -> like future in clojure, can send work off to be done in another process, then later wait for the return value
  • use of Erlang’s OTP built into Elixir’s library
  • construct the thing with start_link, but send it messages via GenServer…more indirection
  • hmm…claims it’s a “fully distributed server”, but all i see are functions getting called that return values, no client-server relationship here?
  • final example: cast works fine, but call is broken (says process not alive; same message regardless of what command is sent in: :rent, :return, etc.)
  • oddly enough, it works *until* we make the changes to have the supervisor run everything for us behind the scenes (“like magic!”)
  • endnotes say we learned about protocols, but they were mentioned only once, in day two, as something we should look up on our own :/
  • would have been nicer to actually *use* the concurrency features of the language, to, idk, maybe use all the cores on your laptop to run a map/reduce job?

Seven More Languages in Seven Weeks: Elm

Between the move and the election and the holidays, it took me a long time to finish this chapter.

But I’m glad I did, because Elm is — dare I say — fun?

The error messages are fantastic. The syntax feels like Haskell without being as obtuse.  Even the package management system just feels nice.

A sign of how much I liked working in Elm: the examples for Day Two and Three of the book were written for Elm 0.14, using a concept called signals. Unfortunately, signals were completely removed in Elm 0.17 (!). So to get the book examples working in Elm 0.18, I had to basically rebuild them. Which meant spending a lot of time with the (admittedly great) Elm tutorial and trial-and-erroring things until they worked again.

None of which I minded because, well, Elm is a great language to work in.

Here are the results of my efforts:

And here’s what I learned:

Day One

  • haskell-inspired
  • elm-installer: damn, that was easy
  • it’s got a repl!
  • emacs mode also
  • types come back with all the values (expression results)
  • holy sh*t: “Maybe you forgot some parentheses? Or a comma?”
  • omg: “Hint: All elements should be the same type of value so that we can iterate through the list without running into unexpected values.”
  • type inferred: don’t have to explicitly declare the type of every variable
  • polymorphism via type classes
  • single-assignment, but the repl is a little looser
  • pipe syntax for if statement in book is gone in elm 0.17
  • case statement allows pattern matching
  • case statement needs the newlines, even in the repl (use `\`)
  • can build own complex data types (but not type classes)
  • case also needs indentation to work (especially if using result for assignment in the repl)
  • records: abstract types for people without beards
  • changing records: use `=` instead of `<-`: { blackQueen | color = White }
  • records look like they’re immutable now, when they weren’t before? code altering them in day one doesn’t work
  • parens around function calls are optional
  • infers types of function parameters
  • both left and right (!) function application, <| and |>
  • got map and filter based off the List type (?)
  • no special syntax for defining a function versus a regular variable, just set a name equal to the function body (with the function args before the equal sign)
  • head::tail pattern matching in function definition no longer works; elm is now stricter about requiring you to define all the possible inputs, including the empty list
  • elm is a curried language (!)
  • no reduce: foldr or foldl
  • have to paren the infix functions to use in foldr: List.foldr (*) 1 list
  • hard exercise seems to depend on elm being looser than it is; afaict, it won’t let you pass in a list of records with differing fields (type violation), nor will it let you try to access a field that isn’t there (another type violation)

Day Two

  • section is built around signals, which were removed in Elm 0.17 (!)
  • elm has actually deliberately moved away from FRP as a paradigm
  • looks like will need to completely rewrite the sample code for each one as we go…thankfully, there are good examples in the elm docs (whew!)
  • [check gists for rewritten code]
  • module elm-lang/keyboard isn’t imported in the elm online editor by default anymore

Day Three

  • can fix the errors from loading Collage and Element into main by using the toHtml method of the Collage object
  • elm-reactor will give you a hot-updated project listening on port 8000 (so, refresh web page of localhost:8000 and get updated view of what your project looks like)
  • error messages are very descriptive, can work through upgrading a project just by following along (and refreshing a lot)
  • critical to getting game working: https://ohanhi.github.io/base-for-game-elm-017.html (multiple subscriptions)

The Problem with Programmer Interviews

You’re a nurse. You go in to interview for a new job at a hospital. You’re nervous, but confident you’ll get the job: you’ve got ten years of experience, and a glowing recommendation from your last hospital.

You get to the interview room. There must be a mistake, though. The room number they gave you is for an operating room.

You go in anyway. The interviewer greets you, clipboard in hand. He tells you to scrub up and join the operation in progress.

“But I don’t know anything about this patient,” you say. “Or this hospital.”

He waves away your worries. “You’re a nurse, aren’t you? Get in there and prove it.”

….

You’re a therapist. You’ve spent years counseling couples, helping them come to grips with the flaws in their relationship.

You arrive for your interview with a new practice. They shake your hand, then take you into a room where two men are screaming at each other. Without introducing you, the interviewer pushes you forward.

“Fix them,” he whispers.

You’re a pilot, trying to get a better job at a rival airline. When you arrive at your interview, they whisk you onto a transatlantic flight and sit you in the captain’s chair.

“Fly us there,” they say.

You’re a software engineer. You’ve been doing it for ten years. You’ve seen tech fads come and go. You’ve worked for tiny startups, big companies, and everything in-between. Your last gig got acquired, which is why you’re looking for a new challenge.

The interviewers — there are three of them, which makes you nervous — smile and shake your hand. After introducing themselves, they wave at the whiteboard behind you.

“Code for us.”


Seven More Languages in Seven Weeks: Factor

Continuing on to the next language in the book: Factor.

Factor is…strange, and often frustrating. Where Lua felt simple and easy, Factor feels simple but hard.

Its concatenative syntax looks clean, just a list of words written out in order, but reading it requires you to keep a mental stack in your head at all times, so you can predict what the code does.

Here’s what I learned:

Day One

  • not functions, words
  • pull and push onto the stack
  • no operator precedence, the math words are applied in order like everything else
  • whitespace is significant
  • not anonymous functions: quotations
  • `if` needs quotations as the true and false branches
  • data pushed onto stack can become “out of reach” when more data gets pushed onto it (ex: store a string, and then a number, the number is all you can reach)
  • the `.` word becomes critical, then, for seeing the result of operations without pushing new values on the stack
  • also have shuffle words for just this purpose (manipulating the stack)
  • help documentation crashes; no listing online for how to get word docs in listener (plenty for vocab help, but that doesn’t help me)
  • factor is really hard to google for

Day Two

  • word definitions must list how many values they take from the stack and how many they put back
  • names in those definitions are not args, since they are arbitrary (not used in the word code itself)
  • named global vars: symbols (have get and set; aka getters and setters)
  • standalone code imports NOTHING, have to pull in all needed vocabularies by hand
  • really, really hate the factor documentation
  • for example, claims strings implement the sequence protocol, but that’s not exactly true…can’t use “suffix” on a string, for example

Day Three

  • not maps, TUPLES
  • auto-magically created getters and setters for all
  • often just use f for an empty value
  • is nice to be able to just write out lists of functions and not have to worry about explicit names for their arguments all over the place
  • floats can be an issue in tests without explicit casting (no types for functions, just values from the stack)
  • lots of example projects (games, etc) in the extra/ folder of the factor install

Seven More Languages in Seven Weeks: Lua

Realized I haven’t learned any new programming languages in a while, so I picked up a copy of Seven More Languages in Seven Weeks.

Each chapter covers a different language. They’re broken up into ‘Days’, with each day’s exercises digging deeper into the language.

Here’s what I learned about the first language in the book, Lua:

Day One

Just a dip into basic syntax.

  • table based
  • embeddable
  • whitespace doesn’t matter
  • no integers, only floating-point (!)
  • comparison operators will not coerce their arguments, so you can’t do 42 < '43'
  • functions are first class
  • has tail-call-optimization (!)
  • extra args are ignored
  • omitted args just get nil
  • variables are global by default (!)
  • can use anything as key in table, including functions
  • array indexes start at 1 (!)

Day Two

Multithreading and OOP.

  • no multithreading, no threads at all
  • coroutines will only ever run on one core, so have to handle blocking and unblocking them manually
  • explicit over implicit, i guess?
  • since can use functions as values in tables, can build entire OO system from scratch using (self) passed in as first value to those functions
  • coroutines can also get you memoization, since yielding means the state of the fn is saved and resumed later
  • modules: can choose what gets exported, via another table at the bottom

Day Three

A very cool project — build a midi player in Lua with C++ interop — that was incredibly frustrating to get working. Nothing in the chapter was helpful. Learned more about C++ and Mac OS X audio than Lua.

  • had to add Homebrew’s Lua include directory (/usr/local/Cellar/lua/5.2.4_3/include) into include_directories command in CMakeLists.txt file
  • when compiling play.cpp, linker couldn’t find lua libs, so had to invoke the command by hand (after reading the ld manual) with brew lua lib directory added to its search path via -L
  • basically, add this to CMakeFiles/play.dir/link.txt: -L /usr/local/Cellar/lua/5.2.4_3/lib -L /usr/local/Cellar/rtmidi/2.1.1/lib
  • adding those -L declarations will ensure make will find the right lib directories when doing its ld invocation (linking)
  • also had to go into the Audio Midi Setup utility and set the IAC Driver to “device is online” in order for any open ports to show up
  • AND then needed to be sure I was running the Simplesynth application with the input set to the IAC Driver, to be able to hear the notes

Notes from Strange Loop 2015: Day Two

Pixie

  • lisp
  • own vm
  • compiled using RPython tool chain
  • RPython – restricted python
    • used in PyPy project
    • has own tracing JIT
  • runs on os x, linux, ARM (!)
  • very similar to clojure, but deviates from it where he wanted to for performance reasons
  • has continuations, called stacklets
  • has an open-ended object system; deftype, etc
  • also wanted good foreign function interface (FFI) for calling C functions
  • wants to be able to do :import Math.h :refer cosine
  • ended up writing template that can be called recursively to define everything you want to import
  • writes a C file using the template that has everything you need, then compiles it and uses the return values with the type info, etc
  • you can actually call python from pixie, as well (if you want)
  • not ready for production yet, but a fun project and PRs welcome

History of Programming Languages for 2 Voices

  • David Nolen and Michael Bernstein
  • a programming language “mixtape”

Big Bang: The World, The Universe, and The Network in the Programming Language

  • Matthias Felleisen
  • worst thought you can have: your kids are in middle school
  • word problems in math are not interesting, they’re boring
  • can use image placement and substitution to create animations out of word problems
  • mistake to teach children programming per se. they should use programming to help their math, and math to help their programming. but no programming on its own
  • longitudinal study: understanding a function, even if you don’t do any other programming ever, means a higher income as an adult
  • can design curriculum taking kids from middle school (programming + math) to high school (scheme), college (design programs), to graduate work (folding network into the language)

Notes from Strange Loop 2015: Day One

Unconventional Programming with Chemical Computing

  • Carin Meier
  • Living Clojure
  • @Cognitect
  • Inspired by the book Unconventional Programming Paradigms
  • “the grass is computing”
    • all living things process information via chemical reactions on molecular level
    • hormones
    • immune system
    • bacteria signal processing
  • will NOT be programming with chemicals; using metaphor of molecules and reactions to do computing
    • nothing currently in the wild using chemical computing
  • at the heart of chemical programming: the reaction
  • will calculate primes two ways:
    • traditional
    • with prime reaction
  • uses clojure for the examples
  • prime reaction
    • think of the integers as molecules
    • simple rule: take a vector of 2 integers, divide them; if the mod is zero, return the result of the division, otherwise return the vector unchanged (see the sketch after this list)
    • name of this procedure: gamma chemical programming
    • reaction is a condition + action
    • execute: replacement of original elements by resulting element
    • solution is known when it results in a steady state (hence, for prime reaction, have to churn over lists of integers multiple times to filter out all the non-primes)
  • possible advantages:
    • modeling probabilistic systems
    • drive a computation towards a global max or min
  • higher order
    • make the functions molecules as well
    • fn could “capture” integer molecules to use as args
    • what does it do?
    • it “hatches” => yields original fn and result of applying fn to the captured arguments
    • reducing reaction fn: returns fewer arguments than it takes in
    • two fns interacting: allow to exchange captured values (leads to more “stirring” in the chem sims)
  • no real need for sequential processing; can do things in any order and still get the “right” answer
  • dining philosophers problem
    • something chemical programming handles well
    • two forks: eating philosopher
    • one fork or no forks: thinking philosopher
    • TP with 2 forks reacting with EAT => EP
  • “self organizing”: simple behaviors combine to create what look like complex behaviors
  • mail system: messages, servers, networks, mailboxes, membranes
    • membranes control reactions, keep molecules sorted
    • passage through membranes controlled by servers and network
    • “self organizing”
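
To make the prime reaction concrete, here’s a rough Python sketch of the gamma idea (my own reconstruction; the talk’s examples were in Clojure). One swap worth flagging: I use the classic Gamma rule where the divisor survives and the multiple is consumed, since that version reliably settles into a steady state of exactly the primes.

```python
import random

def react(a, b):
    """Prime reaction: if one integer divides the other, the pair reacts
    and only the divisor survives; otherwise there's no reaction."""
    if a != b and a % b == 0:
        return b
    if a != b and b % a == 0:
        return a
    return None

def gamma(molecules, max_idle=10_000):
    """Stir the solution: apply the reaction to random pairs until a long
    run of failed collisions suggests we've hit a steady state."""
    molecules = list(molecules)
    idle = 0
    while idle < max_idle and len(molecules) > 1:
        i, j = random.sample(range(len(molecules)), 2)
        survivor = react(molecules[i], molecules[j])
        if survivor is None:
            idle += 1
            continue
        idle = 0
        for k in sorted((i, j), reverse=True):  # remove the reacting pair...
            del molecules[k]
        molecules.append(survivor)              # ...and add the reaction product
    return sorted(molecules)

print(gamma(range(2, 51)))  # steady state: the primes up to 50
```

Note that nothing about the order of collisions matters here; the multiset just churns until no reaction can fire, which is the “steady state” from the notes above.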

How Machine Learning helps Cancer Research

  • evelina gabasova
  • university of cambridge
  • cost per human genome has gone down from $100mil (2001) to a few thousand dollars (methodology change in mid-2000s paid big dividends)
  • cancer is not a single disease; underlying cause is mutations in the genetic code that regulates protein formation inside the cell
  • brca1 and brca2 are guardians; they check the chromosomes for mistakes and kill cells that have them, so suppress tumor growth; when they stop working correctly or get mutated, you can have tumors
  • clustering: finding groups in data that are more similar to each other than to other data points
    • example: clustering customers
    • but: clustering might vary based on the attributes chosen (or the way those attributes are lumped together)?
    • yes: but choose projection based on which ones give the most variance between data points
    • can use in cancer research by plotting genes and their expression and looking for grouping
  • want to be able to craft more targeted responses to the diagnosis of cancer based on the patient and how they will react
  • collaborative filtering
    • used in netflix recommendation engine
    • filling in cells in a matrix
    • compute as the product of two smaller matrices (see the sketch after this list)
    • in cancer research, can help because the number of people with certain mutations is small, leading to a sparsely populated database
  • theorem proving
    • basically prolog-style programming, constraints plus relations leading to single (or multiple) solutions
    • can use to model cancer systems
    • was used to show that chronic myeloid leukemia is a very stable system, that just knocking out one part will not be enough to kill the bad cell and slow the disease; helps with drug and treatment design
    • data taken from academic papers reporting the results of different treatments on different populations
  • machine learning not just for targeted ads or algorithmic trading
  • will become more important in the future as more and more data becomes available
  • Q: how long does the calculation take for stabilization sims?
    • A: for very simple systems, can take milliseconds
  • Q: how much discovery is involved, to find the data?
    • A: actually, whole teams developing text mining techniques for extracting data from academic papers (!)
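
The “product of two smaller matrices” point from the collaborative filtering notes is easy to see in code. Here’s a minimal sketch (my own toy illustration with made-up numbers, not the speaker’s): factor a sparse ratings matrix into two low-rank factors by gradient descent on the observed cells only, then use their product to fill in the blanks.

```python
import numpy as np

# Toy ratings matrix: rows are users (or patients), columns are items
# (or mutations); 0 marks a missing observation we'd like to predict.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

observed = R > 0      # only train on cells we actually saw
k = 2                 # rank: the two factor matrices are tall and skinny
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

lr, reg = 0.01, 0.05
for _ in range(5000):
    E = observed * (R - U @ V.T)     # error on observed cells only
    U += lr * (E @ V - reg * U)      # gradient step on user factors
    V += lr * (E.T @ U - reg * V)    # gradient step on item factors

print(np.round(U @ V.T, 1))  # the full product now fills in the 0 cells
```

The sparsity point from the talk is exactly why this helps: even when most cells are empty (few patients share a given mutation), the low-rank structure lets the observed cells constrain the missing ones.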

When Worst is Best

  • Peter Bailis
  • what if we designed computer systems for the worst-case scenarios?
  • website that served 7.3 billion simultaneous users; would on average have lots of idle resources
  • hardware: what if we built this chip for the mars rover? would lead to very expensive packaging (and a lot of R&D to handle low-power low-weight environments)
  • security: all our devs are malicious; makes code deployment harder
  • designing for the worst case often penalizes the average case
  • could we break the curve? design for the worst case and improve the average case too
  • distributed systems
    • almost everything non-trivial is distributed these days
    • operate over a network
    • networks make designs hard
      • packets can be delayed
      • packets may be dropped
    • async network: can’t tell if message has been delayed or dropped
      • handle this by adding replicas that can respond to any request at any time
      • network interruptions don’t stop service
  • no coordination means even when everything is fine, we don’t have to talk
    • possible infinite service scale-out
  • coordinated multi-server transactions pay large penalty as we add more servers (from locks); get more throughput if we let access be uncoordinated
  • don’t care about latency if you don’t have to send messages everywhere
  • but what about the CAP theorem?
    • inktomi from eric brewer: for large scale services, have to trade off between always giving an answer and always giving the right answer
    • takeaway: certain properties of a system (like serializability) require unavailability
    • original paper: nancy lynch
    • common conclusion: availability is too expensive, and we have to give up too much, and it only matters during failures, so forget about it
  • if you use worst case as design tool, you skew toward coordination-avoiding databases
    • high coordination is legacy of old db design
    • coordination-free designs are possible
  • example: read committed isolation
    • goal: never read uncommitted data
    • legacy implementation: lock records during access (coordination)
    • one way: copy on write (x -> x’, do stuff -> write back to x)
    • or: versioning (see the sketch after this list)
    • for more detail, see martin’s talk on saturday about transactions
  • research on coordination-free systems have potential for huge speedups
  • other situations where worst-case thinking yields good results
    • replication for fault tolerance can also increase your request-serving capacity
    • fail-over can help deployments/upgrades: if it’s automatic, you can shut off the primary whenever you want and know that the backups will take over, then bring the primary back up when your work is done
    • tail latency in services:
      • avg of 1.2ms (not bad) can mean 0.1% of requests have 100ms (which is terrible)
      • if you’re one of many services being used to fulfill a front-end request, your worst case is more likely to happen, and so drag down the avg latency for the end-user (with a 100-way fan-out, a 0.1% tail shows up in roughly 1 in 10 front-end requests)
  • universal design: designing well for everyone; ex: curb cuts, subtitles on netflix
  • sometimes best is brittle: global maximum can sit on top of a very narrow peak, where any little change in the inputs can drive it away from the optimum
  • defining normal defines our designs; considering a different edge case as normal can open up new design spaces
  • hardware: what happens if we have bit flips?
  • clusters: what’s our scale-out strategy?
  • security: how do we audit data access?
  • examine your biases
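
As a concrete version of the copy-on-write/versioning bullet above, here’s a minimal Python sketch (my own illustration, not from the talk): readers always get the last committed snapshot, while a writer mutates a private copy and installs it atomically, so “never read uncommitted data” holds without read locks.

```python
import threading

class VersionedBox:
    """Read committed without read coordination: readers see the current
    committed version; writers copy-on-write and swap the reference."""

    def __init__(self, value):
        self._committed = value
        self._write_lock = threading.Lock()  # writers coordinate only with writers

    def read(self):
        return self._committed               # always a fully committed snapshot

    def update(self, fn):
        with self._write_lock:
            draft = dict(self._committed)    # x -> x': work on a copy
            fn(draft)                        # do stuff
            self._committed = draft          # write back to x: atomic swap

box = VersionedBox({"balance": 100})
box.update(lambda d: d.update(balance=d["balance"] - 30))
print(box.read())  # {'balance': 70}; no reader ever saw the half-done update
```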

All In with Determinism for Performance and Testing in Distributed Systems

  • John Hugg
  • VoltDB
  • so you need a replicated setup?
    • could run primary and secondary
    • could allow writes to 2 servers, do conflict detection, and merge all writes
    • NOPE
  • active-active: state a + deterministic op = state b
    • if do same ops across all servers, should end up with the same state
    • have client that sends A B C to coordination system, which then sends ABC to all replicas, which do the ops in order
    • ABC: a logical log, the ordering is what’s important
    • can write log to disk, for later replay
    • can replicate log to all servers, for constant active-active updates
    • can also send log across network for cluster replication
  • look out for non-determinism
    • random numbers
    • wall-clock time
    • record order
    • external systems (ping noaa for weather)
    • bad memory
    • libraries that use randomness for security
  • how to protect from non-determinism?
    • make sure sql is as deterministic as possible
    • 100% of their DML is deterministic
    • rw transactions are hard to make deterministic, have to do a little more planning (swap row-scan for tree-index scan)
    • use seeded random-number generators that are lists created in advance
    • hash up the write ops, and require replicas to send back their computed hashes once the ops are done so the coordinator can confirm the ops were deterministic (see the sketch after this list)
    • can also hash the whole replica state when doing a transactional snapshot
    • reduce latency by sending condensed representation of ops instead of all the steps (the recipe name, not the recipe)
  • why do it?
    • replicate faster, reduces concerns for latency
    • persist everything faster: start logging when the work is requested, not when the work is completed
    • bounded sizes: the work comes in as fast as the network allows, so the log will only be written no faster than the network (no firehose)
  • trade-offs?
    • it’s more work: testing, enforcing determinism
    • running mixed versions is scary: if you fix a bug, and you’re running different versions of the software between the replicas, you no longer have deterministic transactions
    • if you trip the safety checks, we shut down the cluster
  • testing?
    • multi-pronged approach: acid, sql correctness, etc
    • simulation a la foundationDB not as useful for them, since they have more states
    • message/state-machine fuzzing
    • unit tests
    • smoke tests
    • self-checking workload (best value)
      • everything written gets self-checked; so to check a read value, write it back out and see if it comes back unchanged
    • use “nefarious app”: application that runs a lot of nasty transactions, checks for ACID failures
    • nasty transactions:
      • read values, hash them, write them back
      • add huge blobs to rows to slow down processing
      • add mayhem threads that run ad-hoc sql doing updates
      • multi-table joins
        • read and write multiple values
      • do it all many many times within the same transaction
    • mix up all different kinds of environment tweaks
    • different jvms
    • different VM hosts
    • different OSes
    • inject latency, disk faults, etc
  • client knows last sent and last acknowledged transaction, checker can be sure recovered data (shut down and restart) contains all the acknowledged transactions
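
Here’s a toy Python sketch of the active-active scheme plus the hash check described above (my own reconstruction, not VoltDB’s code): every replica applies the same logical log in the same order, then reports a digest of its state, and the coordinator compares digests to catch non-determinism.

```python
import hashlib
import json

def apply_op(state, op):
    """Deterministic ops only: same state + same op = same next state."""
    kind, key, amount = op
    if kind == "set":
        state[key] = amount
    elif kind == "add":
        state[key] = state.get(key, 0) + amount

def digest(state):
    # Canonical serialization, so identical states hash to identical bytes.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

# The logical log: the ordering is what matters, not the timing.
log = [("set", "a", 1), ("add", "a", 41), ("set", "b", 7)]

replicas = [{} for _ in range(3)]
hashes = []
for state in replicas:
    for op in log:          # every replica applies the ops in log order
        apply_op(state, op)
    hashes.append(digest(state))

# The coordinator confirms determinism: every replica must report the same hash.
assert len(set(hashes)) == 1, "non-determinism detected: shut down the cluster"
print("replicas agree:", hashes[0][:12])
```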

Scaling Stateful Services

  • Caitie McCaffrey
  • been using stateless services for a long time, depending on db to store and coordinate our state
  • has worked for a long time, but we got to a place where one db wasn’t enough, so we went to no-sql and sharded dbs
  • data shipping paradigm: client makes request, service fetches data, sends data to client, throws away “stale” data
  • will talk about stateful services, and their benefits, but WARNING: NOT A MAGIC BULLET
  • data locality: keep the fetched data on the service machine
    • lower latency
    • good for data intensive ops where client needs quick responses to operations on large amounts of data
  • sticky connections and consistency
    • using sticky connections and stateful services gives you more consistency models to use: pipelined random access memory, read your write, etc
  • blog post from werner vogels: eventual consistency revisited
  • building sticky connections
    • client connecting to a cluster always gets routed to the same server
  • easiest way: persistent connections
    • but: no stickiness once connection breaks
    • also: mucks with your load balancing (connections might not all last the same amount of time, can end up with one machine holding everything)
    • will need backpressure on the machines so they can break connections when they need to
  • next easiest: routing logic in cluster
    • but: how do you know who’s in the cluster?
    • and: how do you ensure the work is evenly distributed?
    • static cluster membership: dumbest thing that might work; not very fault tolerant; painful to expand;
    • next better: dynamic cluster membership
      • gossip protocols: machines chat about who is alive and dead, each machine on its own decides who’s in the cluster and who’s not; works so long as system is relatively stable, but can lead to split-brain pretty quickly
      • consensus systems: better consistency; but if the consensus truth holder goes down, the whole cluster goes down
  • work distribution: random placement
    • write anywhere
    • read from everywhere
    • not sticky connection, but stateful service
  • work distribution: consistent hashing
    • deterministic request placement
    • nodes in cluster get placed on a ring, request gets mapped to spot in the ring
    • can still have hot spots form, since different requests will have different work that needs to be done, can have a lot of heavy work requests placed on one node
    • work around the hot spots by having a larger cluster, but that’s more expensive (see the sketch after this list)
  • work distribution: distributed hash table
    • non-deterministic placement
  • stateful services in the real world
  • scuba:
    • in-memory db from facebook
    • believed to be static cluster membership
    • random fan-out on write
    • reads from every machine in cluster
    • results get composed by machine running query
    • results include a completeness metric
  • uber ringpop
    • nodejs library that does application-layer sharding for their dispatching services
    • swim gossip protocol for cluster membership
    • consistent hashing for work distribution
  • orleans
    • from Microsoft Research
    • used for Halo4
    • runtime and programming model for building distributed systems based on Actor Model
    • gossip protocol for cluster membership
    • consistent hashing + distributed hash table for work distribution
    • actors can take request and:
      • update their state
      • return their state
      • create a new Actor
    • request comes in to any machine in cluster, it applies hash to find where the DHT is for that client, then that DHT machine routes the request to the right Actor
    • if a machine fails, the DHT is updated to point new requests to a different Actor
    • can also update the DHT if it detects a hot machine
  • cautions
    • unbounded data structures (huge requests, clients asking for too much data, having to hold a lot of things in memory, etc)
    • memory management (get ready to make friends with the garbage collector profiler)
    • reloading state: recovering from crashes, deploying a new node, the very first connection of a session (no data, have to fetch it all)
    • sometimes can get away with lazy loading, because even if the first connection fails, you know the client’s going to come back and ask for the same data anyway
    • fast restarts at facebook: with lots of data in memory, shutting down your process and restarting causes a long wait time for the data to come back up; had success decoupling memory lifetime from process lifetime, would write data to shared memory before shutting process down and then bring new process up and copy over the data from shared to the process’ memory
  • should i read papers? YES!
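
Consistent hashing is small enough to sketch outright. Here’s a minimal Python version (my own illustration; ringpop and Orleans are more sophisticated): nodes hash onto a ring, each request hashes to a point, and the first node clockwise from that point serves it, so placement is deterministic and sticky.

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    # Stable hash, so placement survives process restarts.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    """Deterministic request placement via consistent hashing."""

    def __init__(self, nodes):
        self._points = sorted((ring_hash(n), n) for n in nodes)
        self._keys = [point for point, _ in self._points]

    def node_for(self, request_key: str) -> str:
        # First node clockwise from the request's point, wrapping at the top.
        i = bisect.bisect(self._keys, ring_hash(request_key)) % len(self._keys)
        return self._points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
for key in ["user:17", "user:42", "user:99"]:
    print(key, "->", ring.node_for(key))
# The same key always lands on the same node, and adding or removing a node
# only remaps the keys adjacent to it on the ring (hot spots still possible).
```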

Trust is Critical to Building Software

So much of software engineering is built on trust.

I have to trust that the other engineers on my team will pull me back from the brink if I start to spend too much time chasing down a bug. I have to trust that they’ll catch the flaws in my code during code review, and show me how to do it better. When reviewing their code, at some point I have to trust that they’ve at least tested things locally, and written something that works, even if it doesn’t work well.

Beyond my team, I have to trust the marketing and sales folks to bring in new customers so we can grow the company. I’ve got to trust the customer support team to keep our current customers happy, and to report bugs they discover that I need to fix. I have to trust the product guys to know what features the customer wants next, so we don’t waste our time building things nobody needs.

And every time I use a test fixture someone else wrote, I’m trusting the engineers who worked here in the past. When I push new code, I’m trusting our CI builds to run the tests properly and catch anything that might have broken. By trusting those tests, I’m trusting everyone that wrote them, too.

Every new line of code I write, every test I create, adds to that chain of trust, and brings me into it. As an engineer, I strive to be worthy of that trust, to build software that is a help, and not a burden, to those that rely on it.