Ron Toland
  • No, Bill C-18 Will Not End the Open Web

    Seeing a lot of fear-mongering on Canada’s Bill C-18, which will require companies like Google and Facebook to actually pay newspapers for copying their articles into their services. People are calling it a link tax, saying it will lead to the End of the Open Web. How could such a “bad bill” make it so far?

    Simply put: Because it doesn’t actually do any of the things these panicked people claim it will.

    Here, go read the executive summary of the bill yourself. Far from being a “link tax,” it’s a — belated — intervention of the Canadian government into a market in the public interest.

    Basically, Google and Facebook don’t just link to work produced by others anymore; they copy it and present it on their own websites. They defend this as something done for the benefit of users, a convenience, but really it’s so users — you and I — won’t leave their sites. The longer we stay on their pages, the more ads we’ll see, and the more money they collect.

    What’s wrong with that? Well, those ads used to be sold on the websites of the people writing those articles — newspapers, magazines, blogs — and so the revenue used to flow directly to those people. The creators. Now that money flows to Google and Facebook, who are getting rewarded for what is basically theft. And that’s one reason — among many, sure, but an important one — why so many news orgs across North America have gone belly up in the last decade and a half.

    So Bill C-18 is an attempt to redress that theft, by requiring large search engine companies to enter into a contract with news orgs — or groups of individual news generators — to compensate them for the work they would otherwise take for free.

    It’s not even that innovative a bill! It’s based on one Australia passed in 2021. Prior to that bill passing, I saw the same fear-mongering and breathless doom and gloom announcements about the “end” of the Open Web. Facebook and Google also pulled the same childish stunts, cutting off news access for Australians prior to the bill’s passing.

    So it passed, and did Australia suddenly become a barren internet wasteland? Ha, no. Facebook and Google obeyed the law and cut deals with 30 different media companies, which raked in millions in additional revenue — revenue that will be used to pay journalists — as a result. Meanwhile, the doom-and-gloom gang has moved on to beating the same tired drums about Canada’s bill.

    They were wrong about Australia’s law. They’re wrong about Canada’s.

    For a bit more background on the Australian law, and Facebook’s history of bad behaviour, check out this piece by anti-monopolist Matt Stoller.

    → 10:07 AM, Mar 25
  • Are Job Degree Requirements Racist?

    Since reading Ibram X. Kendi's How to Be an Antiracist, I'm starting to re-examine certain policies I've taken for granted. What I've previously thought of as meritocratic or race-neutral might be neither; it might instead be part of the problem.

    In that book, he gives a clear criterion for whether a policy or idea is a racist one: Does it establish or reinforce racial inequality?

    With that in mind, I thought I'd look at my own house -- the tech industry -- and at our very real tendency to run companies composed mostly of white males.

    There are many reasons why this happens, but I'd like to drill into just one: The university degree requirement.

    Most "good jobs" these days require some sort of university degree. Tech goes one step further, and asks for a degree specifically in computer science or another STEM field.

    The degree isn't enough to get the job, of course. Most interview processes still test skill level at some point. But the field of candidates is narrowed, deliberately, via this requirement.

    The question is: Does requiring this technical degree bias the selection process towards White people?

    Criteria

    Before diving into the statistics, let's back up and talk about the criterion here. How can we tell if the degree requirement biases selection?

    In order to do that, we need to know what an unbiased selection process would look like.

    And here is where it's important to note the composition of the general US population (and why the Census being accurate is so very very important). If all things are equal between racial groups, then the composition of Congress, company boards, and job candidates will reflect their percentages in the population.

    Anything else is inequality between the races, and can only be explained in one of two ways: either you believe there are fundamental differences between people in different racial groups (which, I will point out, is a racist idea), or there are policies in place which are creating the different outcomes.

    With that criterion established, we can examine the possible racial bias of requiring university degrees by looking at two numbers:

    • How many people of each racial group obtain STEM degrees in the United States?
    • How does that compare to their level in the general population?

    Who Has a Degree, Anyway?

    According to 2018 data from the US Census, approximately 52 million people (out of a total US population of roughly 327 million) have a bachelor's degree in the US.

    Of those 52 million, 40.8 million are White.

    Only 4.7 million are Black.

    That means White people hold 79% of all the bachelor's degrees, while Black people hold only 9%.

    Their shares of the general population? 76.3% White, 13.4% Black.

    So Whites are overrepresented among people with bachelor's degrees, and Blacks are underrepresented.
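    The over/underrepresentation claim follows directly from the figures above; here's the arithmetic spelled out as a quick check:

```python
# 2018 figures quoted above, in millions of people.
total_degrees = 52.0   # all bachelor's degree holders
white_degrees = 40.8
black_degrees = 4.7

white_share = white_degrees / total_degrees * 100  # share of all degrees, ~78-79%
black_share = black_degrees / total_degrees * 100  # ~9%

# Shares of the general US population, for comparison:
white_pop_share = 76.3
black_pop_share = 13.4

print(white_share > white_pop_share)  # True: overrepresented
print(black_share < black_pop_share)  # True: underrepresented
```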

    So by requiring any university degree, at all, we've already tilted the scales against Black candidates.

    Who is Getting Degrees?

    But what about new graduates? Maybe the above numbers are skewed by previous racial biases in university admissions (which definitely happened), and if we look at new grads -- those entering the workforce -- the percentages are better?

    I'm sorry, but nope. If anything, it's worse.

    Let's drill down to just those getting STEM degrees (since those are the degrees that would qualify you for most tech jobs). In 2015, according to the NSF, 60.5% of STEM degrees were awarded to White people, and only 8.7% of them went to Black people.

    The same report notes that the percentage of degrees awarded to Black people (~9%) has been constant for the last twenty years.

    So universities, far from leveling the racial playing field, actually reinforce inequality.

    Conclusion

    Simply by asking for a university degree, then, we're narrowing our field of candidates, and skewing the talent pool we draw from so that White people are overrepresented.

    Thus, we're more likely to select a White candidate, simply because more White people are able to apply.

    That reinforces racial inequality, and makes requiring a university degree for a job -- any job -- a racist policy.

    What can we do instead? To be honest, if your current interview process can't tell candidates who have the right skills from candidates who don't, then requiring a college degree won't fix it.

    If your interview process leans heavily on discovering a candidate's background, instead of their skills, re-balance it. Come up with ways to measure the skills of a candidate that do not require disclosure of their background.

    In programming, we have all sorts of possible skill-measuring techniques: Asking for code samples, having candidates think through a problem solution during the interview, inviting essay answers to questions that are open-ended but can only be completed by someone with engineering chops.

    By asking for a demonstration of skill, rather than personal history, we'd both make our interviews better -- because we'd be filtering for candidates who have shown they can do the job -- and less biased.

    And if we're serious about increasing diversity in our workplaces, we'll drop the degree requirement.

    → 8:00 AM, Aug 3
  • Review: Brydge Pro Keyboard

    I've tried both Logitech's and Zagg's versions of the iPad keyboard/case combo before, and neither of them worked out for me. The Logitech version was rugged and had a good keyboard, but it was too hard to get the iPad out of the case when I wanted to use it as a tablet. The Zagg folio felt cheap, and wasn't comfortable to type on.

    I'm currently using the Apple Smart Keyboard Folio, and it's...fine. The angle that it sets the screen at is too steep to be comfortable, and it doesn't sit very stably on my lap, but it works, and I can type on it fast enough.

    But I've heard a lot of good things about the Brydge keyboards, especially the "it makes it work just like a laptop" line. Comfortable to type on. Holds the screen at any angle you want. Easy to pull it out to become a tablet again.

    So when they recently went on sale -- because the new version, with a built-in trackpad, is coming -- I snapped one up.

    First Impressions

    First off, this thing is absolutely gorgeous in the box. Like, I didn't want to take it out, it was so pretty.

    And the box itself is pretty impressive; it's got a cheatsheet of what all the different function keys do printed right on the inside cover. There's almost no need to refer to the included QuickStart instructions.

    Getting the iPad in the clips isn't too bad. They're stiff, but moveable. Ditto taking it out again. You need a firm grip, and a willingness to pull hard on something you might have paid $1,000 for, but it can be done.

    Typing

    The typing experience on this keyboard is, in a word, miserable.

    My accuracy immediately plunged when I tried typing anything at all on it. The keys are both small and very close together, making the whole thing feel cramped. I felt like I was typing with my hands basically overlapping, it's that small.

    On top of that, the keys sometimes stutter, or miss keystrokes. I had to strike each one much harder than I'm used to, which makes their small size and tight spacing even worse.

    And the keyboard itself has a noticeable lag between when you open it to use it, and when it manages to pair with the iPad. It's a small thing, to be sure, but when you're used to the instantly-on nature of everything else on the iPad, it's a drag to have to wait on your keyboard to catch up.

    Oh, and did I mention the whole thing -- keyboard, screen hinge, everything -- lifts off the table as you tilt the screen back? So the further back the screen goes, the more the keys tilt away from the plane of the desk. Yes, that means you have to adjust your typing to the angle of the screen, which is...not normal?

    Still, a proper Inverted-T for the arrow keys is nice to have back.

    And controlling screen brightness from the keyboard is cool. Not worth losing all that space that could have been put into larger keys or better key gaps (or just better keys, period), but here we are.

    Portability

    Jesus, this thing is heavy. I mean, it feels as heavy as my 16" work laptop. Definitely not feeling footloose and fancy-free while the iPad is locked into it.

    As a bonus, it's really slippery when closed, making it both heavy and hard to hold onto. Just an accidental drop waiting to happen.

    And I don't see how the tiny rubber things sticking up from the case are going to protect my screen when it's closed, especially as the thing ages and those rubber nubs become...nubbier.

    Using it as a Laptop

    The clips holding the iPad in place are really stiff, except when they're not. That is, anytime you forget it's not a real laptop and pick it up by the iPad.

    There's also no way to open it when closed without knocking any Apple Pencil you have attached out of place.

    It's fairly stable on my lap, so long as I don't tilt the screen back too far. There's a point where the whole thing just starts to wobble.

    While the iPad's in it, it's kind of hard to hit the bottom of the screen to dismiss the current application and get the home screen back. Thankfully, they included a dedicated Home button on the keyboard, a nice touch.

    However, the "On-Screen Keyboard" key doesn't work. At all.

    Comparison with the Apple Smart Keyboard Folio

    Using this made me realize things I want in an iPad keyboard that I never noticed before:

    • I don't want to have to worry about plugging my keyboard in.
    • I don't want to worry about it waking up and re-pairing with my iPad every time.
    • I don't want to have to jerk on my iPad every time I want to convert it back to a tablet.

    And the Apple Smart Keyboard Folio checks all of those boxes.

    It's also lighter, and the keys are spaced further apart, making it less cramped. They also don't need as much pressure to activate.

    Final Thoughts

    So, yeah...I've returned the Brydge, and gone back to using the Smart Keyboard Folio.

    I always thought of Apple's version as the "default," and that third-party keyboards would naturally be better. But it turns out weight, portability, and ease-of-use (no charging, always on) matters a lot more to me than I thought.

    → 8:00 AM, Apr 13
  • More on the iPad Pro

    "In fact, the iPad Pro hardware, engineering, and silicon teams are probably the most impressive units at Apple of recent years. The problem is, almost none of the usability or productivity issues with iPads are hardware issues."

    Found Craig Mod's essay about the iPad Pro from two years ago. It's an excellent essay, and perfectly relevant today.

    It reminded me why I bought an iPad Pro to begin with: The sheer possibilities inherent in such an ultra-portable, powerful device.

    But he also hits on everything that makes the iPad so frustrating to actually use. The way it wants to keep everything sequestered and hidden, when to really get some work done on it I need to have access to everything, instantly, and sometimes all at once.

    I can get that on a Mac. I can’t on an iPad.

    Which is why I disagree with him that the iPad is good for writing. So much of my writing time is actually spent editing, not drafting, and editing is exactly the kind of thing – lots of context switching, needing to see multiple views of the same document at once – iPads are terrible at.

    I sincerely hope that renaming the operating system “iPadOS” means Apple will start fixing some of these glaring problems with the iPad’s software. It’s just so tragic that the hardware is being held back from its full potential by the OS.

    → 9:00 AM, Feb 5
  • iPad Pro: 10 Years Later, and One Year In

    Looking Back

    The iPad's 10 years old this month, and so there are a lot of retrospectives going around.

    Most of them express a disappointment with it, a sense that an opportunity has been missed.

    And they're right. From UI design flaws to bad pricing, the story of the iPad is one of exciting possibilities constantly frustrated.

    For my part, I've owned three different iPads over the past few years. I've ended up returning or selling them all, and going back to the Mac.

    My current iPad Pro is the one I've had the longest. It's made it a full year as my primary computing device, for writing, reading, and gaming.

    But here I am, back typing on my 2014 Mac Mini instead of writing this on the iPad.

    So what's making me switch back?

    It's All About the Text

    For a machine that should be awesome to use as a writer -- it's super-portable, it's always connected to the internet via cell service, it lets me actually touch the words on the screen -- the iPad is very, very frustrating in practice.

    Most of that is due to the sheer incompetence of the UI when it comes to manipulating text.

    Want to move a paragraph around? Good luck:

    • You'll need to tap the screen once, to activate "entering text" mode on whatever application you're in.
    • Then you'll need to double-tap, to indicate you want to select some text.
    • Then you'll need to move two tiny targets around to select the text you want. Tap anywhere else than exactly on those targets, and you'll leave select-text mode entirely, and have to start over.
    • If you should accidentally need to select text that's slightly off-screen, more fool you: once your dragging finger hits the screen edge, it'll start scrolling like crazy, selecting all the text it passes. And getting back to the start means lifting your finger off the select area and scrolling, which will kick you out of select-text mode. You've got to start over now.
    • Even if all your desired text is on one screen, those tiny endpoints you’re moving can start to stutter and skip around at the end of a paragraph or section of text. You know, exactly where you’d probably want to place them.
    • If you should somehow succeed in getting just the text you want selected, you need to move it. Press on the text, but not too firmly, to watch it lift off the screen. Then drag it to where you need it. Try not to need to drag it off the edge of the screen, or you'll get the same coked-out scrolling from before. And don't bother looking for a prompt or anything to indicate where this text is going to end up. Apple expects you to use the Force, young padawan.

    That's right. A process that is click-drag-Cmd-c-Cmd-v on a Mac is a multi-step game of Operation that you'll always lose on an iPad.

    So I’ve gotten in the habit of writing first drafts on the iPad, and editing them on the Mac.

    But that assumes iCloud is working.

    iCloud: Still Crazy After All These Years

    Most of the writing apps on the iPad have switched to using iCloud to sync preferences, folder structure, tags, and the documents themselves.

    Makes sense, right? Use the syncing service underlying the OS.

    Except it doesn't always work.

    I've had docs vanish. I've popped into my iPhone to type a few notes in an existing doc, then waited days for those same notes to show up in the document on my iPad.

    iOS 13 made all this worse, by crippling background refresh. So instead of being able to look down and see how many Todos I have left to do, or Slack messages waiting for me, I have to open all these applications, one by one, to get them to refresh. It's like the smartphone dark ages.

    Since my calendars, email, etc aren't getting refreshed correctly, my writing doesn't either. I tell you, nothing makes me want to throw my iPad across the room more than knowing a freaking block of text is there in a doc (I can see it on my iPhone) but it hasn't shown up on the iPad yet. Because not only do I not have those words to work with, but if I assume I can keep editing before sync completes, I'm going to lose them entirely.

    But there's Dropbox, you say. Yes, Dropbox works. But Dropbox is slow, the interface is clunky, and their stance on privacy is...not great.

    You Still Can't Code On It

    I'm a multi-class programmer/writer. I write words and code. I need a machine that does both.

    The iPad has been deliberately crippled, though, so no matter how fast they make the chip inside, it'll never be able to do the most basic task of computing: Allow the user to customize it.

    You can't write iOS software on an iPad. You can't write a little python script and watch it execute. You can't learn a new programming language on an iPad by writing code and seeing what it does to the machine.

    You can't even get a proper terminal on it.

    You're locked out of it, forever.

    And that's the ultimate tragedy of the iPad. Not that the UI was broken, or the original Apple pricing for its software was wrong.

    It's that its users aren't allowed to take it to its full potential.

    Because that's what it needs. Users have to be able to fix the things that are broken, in whatever creative way they see fit, for a piece of technology to become revolutionary.

    And they have to be able to do it right there, on the device, without having to invest thousands of dollars in a different machine that can run the bloated thing Xcode has become.

    It's that barrier, that huge NO painted across the operating system, that ultimately frustrates me about the iPad. Because it doesn't have to be there. It was designed and built deliberately, to keep us out.

    → 9:00 AM, Feb 3
  • Notes from Clojure/Conj 2017

    It was a good Conj. Saw several friends and former co-workers of mine, heard some great talks about machine learning, and got some good ideas to take back to my current gig.

    There were some dud talks, too, and others that promised one (awesome) thing and delivered another (boring) thing, but overall it’s inspiring to see how far Clojure has come in just ten years.

    My notes from the conference:

    DAY ONE

    KEYNOTE FROM Rich Hickey: Effective Programs, or: 10 years of Clojure

    • clojure released 10 years ago
    • never thought more than 100 people would use it
    • clojure is opinionated
      • few idioms, strongly supported
      • born out of the pain of experience
    • had been programming for 18 years when wrote clojure, mostly in c++, java, and common lisp
    • almost every project used a database
    • was struck by two language designers that talked disparagingly of databases, said they’d never used them
    • situated programs
      • run for long time
      • data-driven
      • have memory that usually resides in db
      • have to handle the weirdness of reality (ex: “two-fer tuesday” for radio station scheduling)
      • interact with other systems and humans
      • leverage code written by others (libraries)
    • effective: producing the intended result
      • prefers this to the word “correctness”; none of his programs ever cared about correctness
    • but: what is computing about?
      • making computers effective in the world
      • computers are effective in the same way people are:
        • generate predictions from experience
        • enable good decisions
      • experience => information => facts
    • programming is NOT about itself, or just algorithms
    • programs are dominated by information processing
    • but that’s not all: when we start talking to the database or other libraries, we need different protocols to talk to them
    • but there’s more! everything continues to mutate over time (db changes, requirements change, libraries change, etc)
    • we aspire to write general purpose languages, but will get very different results depending on your target (phone switches, device drivers, etc)
    • clojure written for information-driven situated programs
    • clojure design objectives
      • create programs out of simpler stuff
      • want a low cognitive load for the language
      • a lisp you can use instead of java/c# (his common lisp programs were never allowed to run in production)
    • says classes and algebraic types are terrible for the information programming problem, claims there are no first-class names, and nothing is composable
    • in contrast to java’s multitude of classes and haskell’s multitude of types, clojure says “just use maps”
    • says pattern matching doesn’t scale, flowing type information between programs is a major source of coupling and brittleness
    • positional semantics (arg-lists) don’t scale, eventually you get a function with 17 args, and no one wants to use it
    • sees passing maps as args as a way to attach names to things, thinks it’s superior to positional args or typed args
    • “types are an anti-pattern for program maintenance”
    • using maps means you can deal with them on a need-to-know basis
    • things he left out deliberately:
      • parochialism: data types
      • “rdf got it right”, allows merging data from different sources, without regard for how the schemas differ
      • “more elaborate your type system, more parochial the types”
      • in clojure, namespace-qualified keys allow data merging without worrying about colliding schemas (should use the reverse-domain scheme, same as java, but not all clojure libraries do)
      • another point: when data goes out over the wire, it’s simple: strings, vectors, maps. clojure aims to have you program the same inside as outside
    • smalltalk and common lisp: both languages that were designed by people for working programmers, and it shows
      • surprisingly, the jvm has a similar sensibility (though java itself doesn’t)
    • also wanted to nail concurrency
      • functional gets you 90% of the way there
    • pulled out the good parts of lisp
    • fixed the bad parts: not everything is a list, packages are bad, cons cell is mutable, lists were kind of functional, but not really
    • edn data model: values only, the heart of clojure, compatible with other languages, too
    • static types: basically disagrees with everything from the Joy of Types talk
    • spec: clojure is dynamic for good reasons, it’s not arbitrary, but if you want checking, it should be your choice, both to use it at all and where to use it
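    Hickey's namespace-qualified keys point is easy to sketch. Clojure uses namespaced keywords (e.g. :com.billing/id); here's a rough Python stand-in using string keys (the source names are invented) to show how merging avoids schema collisions:

```python
# Two data sources describe the same entity with clashing field names.
# Namespace-qualified keys (reverse-domain style, as Clojure recommends)
# let the maps merge without the schemas colliding.
billing = {"com.billing/id": 42, "com.billing/name": "R. Toland"}
marketing = {"org.marketing/id": "u-42", "org.marketing/name": "Ron"}

merged = {**billing, **marketing}  # no collisions, nothing overwritten
assert len(merged) == 4
assert merged["com.billing/id"] == 42 and merged["org.marketing/id"] == "u-42"
```

    With plain keys like "id" and "name", one source would have silently clobbered the other.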
    Learning Clojure and Clojurescript by Playing a Game
    • inspired by the gin rummy card game example in DrScheme for the scheme programming language
    • found the java.awt.Robot class, which can take screenshots and move the mouse, click things
    • decided to combine the two, build a robot that could play gin rummy
    • robot takes a screenshot, finds the cards, their edges, and which ones they are, then plays the game
    • lessons learned:
      • java interop was great
    • when clojurescript came out, decided to rebuild it, but in the browser
    • robot still functions independently, but now takes screenshot of the browser-based game
    • built a third version with datomic as a db to store state, allowing two clients to play against each other
    • absolutely loves the “time travel” aspects of datomic
    • also loves pedestal
    Bayesian Data Analysis in Clojure
    • using clojure for about two years
    • developed toolkit for doing bayesian statistics in clojure
    • why clojure?
      • not as many existing tools as julia or R
      • but: easier to develop new libraries than in julia (and most certainly R)
      • other stats languages like matlab and R don’t require much programming knowledge to get started, but harder to dev new tools in them
    • michaellindon/distributions
      • open-source clojure lib for generating and working with probability distributions in clojure
      • can also provide data and prior to get posterior distribution
      • and do posterior-predictive distributions
    • wrote a way to generate random functions over a set of points (for trying to match noisy non-linear data)
    • was easy in clojure, because of lazy evaluation (can treat the function as defined over an infinite vector, and only pull out the values we need, without blowing up)
    • …insert lots of math that i couldn’t follow…
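    The lazy-evaluation trick the speaker describes translates naturally to Python generators: treat the random function as defined over an infinite sequence of points, and only realize the values you actually need (a sketch of the idea, not the michaellindon/distributions library):

```python
import itertools
import random

random.seed(0)

def random_walk():
    """Lazily yield values of a random function at x = 0, 1, 2, ...
    The 'infinite vector' is never materialized; each value is computed
    only when a consumer asks for it."""
    y = 0.0
    while True:
        yield y
        y += random.gauss(0, 1)  # previous value plus Gaussian noise

# Realize only the first five points; the infinite rest never gets computed.
points = list(itertools.islice(random_walk(), 5))
assert len(points) == 5 and points[0] == 0.0
```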
    Building Machine Learning Models with Clojure and Cortex
    • came from a python background for machine learning
    • thinks there’s a good intersection between functional programming and machine learning
    • will show how to build a baby classification model in clojure
    • expert systems: dominant theory for AI through the 2010s
      • limitations: sometimes we don’t know the rules, and sometimes we don’t know how to teach a computer the rules (even if we can articulate them)
    • can think of the goal of machine learning as being to learn the function F that, when applied to a set of inputs, will produce the correct outputs (labels) for the data
    • power of neural nets: assume that they can make accurate approximations for a function of any dimensionality (using simpler pieces)
    • goal of neural nets is to learn the right coefficients or weights for each of the factors that affect the final label
    • deep learning: a “fat” neural net…basically a series of layers of perceptrons between the input and output
    • why clojure? we already have a lot of good tools in other languages for doing machine learning: tensorflow, caffe, theano, torch, deeplearning4j
    • functional composition: maps well to neural nets and perceptrons
    • also maps well to basically any ML pipeline: data loading, feature extraction, data shuffling, sampling, recursive feedback loop for building the model, etc
    • clojure really strong for data processing, which is a large part of each step of the ML pipeline
      • ex: lazy sequences really help when processing batches of data multiple times
      • can also do everything we need with just the built-in data structures
    • cortex: meant to be the theano of clojure
      • basically: import everything from it, let it do the heavy lifting
    • backend: compute lib executes on both cpu and gpu
    • implements as much of neural nets as possible in pure clojure
    • meant to be highly transparent and highly customizable
    • cortex represents neural nets as DAG, just like tensorflow
      • nodes, edges, buffers, streams
    • basically, a map of maps
      • can go in at any time and see exactly what all the parameters are, for everything
    • basic steps to building model:
      • load and process data (most time consuming step until you get to tuning the model)
      • define minimal description of the model
      • build full network from that description and train it on the model
    • for example: chose a credit card fraud dataset
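    The "learn the right weights" idea in miniature: a single perceptron learning the AND function from labeled examples (a toy sketch of the concept, not Cortex):

```python
# Labeled data for AND: inputs and the correct output (label) for each.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

def predict(x):
    # Step activation over the weighted sum of inputs.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):  # a few passes over the data is plenty here
    for x, label in data:
        err = label - predict(x)  # perceptron update rule
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

    A deep net is layers of these units, but the loop is the same shape: predict, measure error, nudge the weights.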
    Clojure: Scaling the Event Stream
    • director, programmer of his own company
    • recommends ccm-clj for cassandra-clojure interaction
    • expertise: high-availability streaming systems (for smallish clients)
    • systems he builds deal with “inconvenient” sized data for non-apple-sized clients
    • has own company: troy west, founded three years ago
    • one client: processes billions of emails, logs 10–100 events per email, multiple systems log in different formats, 5K–50K event/s
    • 10–100 TB of data
    • originally, everything logged on disk for analysis after the fact
    • requirements: convert events into meaning, support ad-hoc querying, generate reports, do real-time analysis and alerting, and do it all without falling over at scale or losing uptime
    • early observations:
      • each stream is a seq of immutable facts
      • want to index the stream
      • want to keep the original events
      • want idempotent writes
      • just transforming data
    • originally reached for java, since that’s the language he’s used to using
    • data
      • in-flight: kafka
      • compute over the data: storm (very powerful, might move in the direction of onyx later on)
      • at-rest: cassandra (drives more business to his company than anything else)
    • kafka: partitioning really powerful tool for converting one large problem into many smaller problems
    • storm: makes it easy to spin up more workers to process individual pieces of your computation
    • cassandra: their source of truth
    • query planner, query optimizer: services written in clojure, instead of throwing elasticsearch at the problem
    • recommends: Designing Data-Intensive Applications, by Martin Kleppmann
    • thinks these applications are clojure’s killer app
    • core.async gave them fine-grained control of parallelism
    • recommends using pipeline-async as add-on tool
    • composeable channels are really powerful, lets you set off several parallel ops at once, as they return have another process combine their results and produce another channel
    • but: go easy on the hot sauce, gets very tempting to put it everywhere
    • instaparse lib critical to handling verification of email addresses
    • REPL DEMO
    • some numbers: 0 times static types would have saved the day, 27 repos, 2 team members
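    The partitioning point ("converting one large problem into many smaller problems") can be sketched in a few lines: hash each event's key to a partition, so one big stream becomes several independent, ordered sub-streams that workers can consume in parallel. This is the idea only, not the Kafka client API:

```python
from collections import defaultdict
from zlib import crc32

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    # The same key always maps to the same partition, so per-key ordering
    # survives even though partitions are consumed independently, in parallel.
    return crc32(key.encode()) % NUM_PARTITIONS

events = [("msg-1", "sent"), ("msg-2", "sent"), ("msg-1", "bounced")]
partitions = defaultdict(list)
for key, payload in events:
    partitions[partition_for(key)].append((key, payload))

# All of msg-1's events land on one partition, in order:
p = partition_for("msg-1")
assert [e for e in partitions[p] if e[0] == "msg-1"] == [
    ("msg-1", "sent"), ("msg-1", "bounced")]
```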
    Day Two

    The Power of Lacinia and Hystrix in Production

    • few questions:
      • anyone tried to combine lacinia and hystrix?
      • anyone played with lacinia?
      • anyone used graphql?
      • anyone used hystrix?
    • hystrix : circuit-breaker implementation
    • lacinia: walmart-labs’ graphql
    • why both?
    • simple example: ecommerce site, aldo shoes, came to his company wanting to renovate the whole website
    • likes starting his implementations by designing the model/schema
    • in this case, products have categories, and categories have parent/child categories, etc
    • uses graphviz to write up his model designs
    • initial diagram renders it all into a clojure map
    • they have a tool called umlaut that they used to write a schema in a single language, then generate representations (via instaparse) in graphql, clojure schema, etc
    • lacinia resolver: takes a graphql query and returns json result
    • lacinia ships with a react application called GraphiQL that lets you explore your data through the browser (via live queries, etc)
    • gives a lot of power to the front-end when you do this, lets them change their queries on the fly, without having to redo anything on the backend
    • problem: the images are huge, 3200x3200 px
    • need something smaller to send to users
    • add a new param to the schema: image-obj, holds width and height of the image
    • leave the old image attribute in place, so don’t break old queries
    • can then write new queries on the front-end for the new attribute, fetch only the size of image that you want
    • one thing he’s learned from marathon running (and stolen from the navy seals): “embrace the suck.” translation: the situation is bad. deal with it.
    • his suck: ran into problem where front-end engineers were sending queries that timed out against the back-end
    • root cause: front-end queries hitting backend that used 3rd-party services that took too long and broke
    • wrote a tiny latency simulator: added random extra time to round-trip against db
    • even with 100ms max, latency diagram showed ~6% of the requests (top-to-bottom) took over 500ms to finish
    • now tweak it a bit more: have two dependencies, and one of them has a severe slowdown
    • now latency could go up to MINUTES
    • initial response: reach for bumping the timeouts
    • time for hystrix: introduce a circuit breaker into the system, to protect the system as a whole when an individual piece goes down
    • hystrix has an official clojure wrapper (!)
    • provides a macro: defcommand, wrap it around functions that will call out to dependencies
    • if it detects that a dependency is timing out, future calls fail immediately, rather than waiting
    • as part of the macro, can also specify a fallback-fn, to be called when the circuit breaker is tripped
    • adding that in, the latency diagram is completely different. performance stays fast under much higher load
    • fallback strategies:
      • fail fast
      • fail silently
      • send static content
      • use cached content
      • use stubbed content: infer the proper response, and send it back
      • chained fallbacks: a little more advanced; like connecting multiple circuit breakers in a row, so if one fails, the next can take over
    • hystrix dashboard: displays info on every defcommand you’ve got, tracks health, etc
    • seven takeaways
      • MUST embrace change in prod
      • MUST embrace failure: things are going to break, you might as well prepare for it
      • graphql is just part of the equation; if your resolvers get too complex, can introduce hystrix and push the complexity into other services
      • monitor at the function level (via hystrix dashboard)
      • adopt a consumer-driven mindset: the users have the money, don’t leave their money on the table by giving them a bad experience
      • force yourself to think about fallbacks
      • start thinking about the whole product: production issues LOOK to users like production features
    • question: do circuit-breakers introduce latency?
      • answer: a little bit at the upper end, once it’s been tripped
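    The tripping behavior described above can be sketched without hystrix. This is a toy circuit breaker of my own in plain clojure — not the hystrix defcommand macro the talk used — just to show the fail-fast-to-fallback idea:

    ```clojure
    ;; toy circuit breaker: after `threshold` consecutive failures,
    ;; stop calling the slow dependency and go straight to the fallback
    (defn circuit-breaker [f fallback-fn & {:keys [threshold] :or {threshold 3}}]
      (let [failures (atom 0)]
        (fn [& args]
          (if (>= @failures threshold)
            (apply fallback-fn args)        ; tripped: fail fast
            (try
              (let [r (apply f args)]
                (reset! failures 0)         ; success closes the breaker
                r)
              (catch Exception _
                (swap! failures inc)
                (apply fallback-fn args)))))))

    ;; a dependency that is hard down, with a "use cached content" fallback
    (def flaky (circuit-breaker
                 (fn [_] (throw (ex-info "dependency down" {})))
                 (fn [_] :cached-content)))

    (println (vec (repeatedly 5 #(flaky 1))))
    ```

    Real hystrix adds the parts that matter in production — rolling error-rate windows, half-open probing, and the dashboard metrics — which is why the talk reaches for it instead of something hand-rolled.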
    The Tensors Must Flow
    • works at magento, lives in philly
    • really wants to be sure our future robot masters are written in clojure, not python
    • guildsman: tensorflow library for clojure
    • tensorflow: ML lib from google, recently got a c api so other languages can call into it
    • spoiler alert: don’t get TOO excited. everything’s still a bit of a mess
    • but it DOES work, promise
    • note on architecture: the python client (from google) has access to a “cheater” api that isn’t part of the open c api. thus, there’s some things it can do that guildsman can’t because the api isn’t there
    • also: ye gods, there’s a lot of python in the python client. harder to port everything over to guildsman than he thought
    • very recently, tensorflow started shipping with a java layer built on top of a c++ lib (via jni), which itself sits on top of the tensorflow c api; some people have started building on top of that
    • but not guildsman: it sits directly on the c api
    • in guildsman: put together a plan, then build it, and execute it
    • functions like guildsman/add produce plan maps, instead of executing things themselves
    • simple example: adding two numbers: just one line in guildsman
    • another simple example: have machine learn to solve | x - 2.0 | by finding the value of x that minimizes it
    • tensorflow gives you the tools to find minima/maxima: gradient descent, etc
    • gradient gap: guildsman can use either the clojure gradients, or the c++ ones, but not both at once
      • needs help to port the c++ ones over to clojure (please!)
    • “python occupies the 9th circle of dependency hell”: been using python lightly for years, and still has problems getting dependencies resolved (took a left turn at the virtual environment, started looking for my oculus rift)
    • demo: using mnist dataset, try to learn to recognize handwritten characters
    The Dawn of Lisp: How to Write Eval and Apply in Clojure
    • educator, started using scheme in 1994, picked up clojure in 2009
    • origins of lisp: john mccarthy’s paper: recursive functions of symbolic expressions and their computation by machine, part i
    • implementation of the ideas of alonzo church, from his book “the calculi of lambda-conversion”
    • “you can always tell the lisp programmers, they have pockets full of punch cards with lots of closing parentheses on them”
    • steve russell (one of the creators of spacewar!) was the first to actually implement the description from mccarthy’s paper
    • 1962: lisp 1.5 programmer’s manual, included section on how to define lisp in terms of itself (section 1.6: a universal lisp function)
    • alan kay described this definition (of lisp in terms of lisp) as the maxwell equations of software
    • how eval and apply work in clojure:
      • eval: send it a quoted list (data structure, which is also lisp), eval will produce the result from evaluating that list
        • ex: (eval '(+ 2 2)) => 4
      • apply: takes a function and a quoted list, applies that function to the list, then returns the result
        • ex: (apply + '(2 2)) => 4
    • rules for converting the lisp 1.5 spec to clojure
      • convert all m-expressions to s-expressions
      • keep the definitions as close to original as possible
      • drop the use of dotted pairs
      • give all global identifiers a ‘$’ prefix (not really the way clojure says it should be used, but helps the conversion)
      • add whitespace for legibility
    • m-expressions vs s-expressions:
      • F[1;2;3] becomes (F 1 2 3)
      • [X < 0 -> -X; T -> X] becomes (COND ((< X 0) (- X)) (T X))
    • dotted pairs
      • basically (CONS (QUOTE A) (QUOTE B))
    • definitions: $T -> true, $F -> false, $NIL, $cons, $atom, $eq, $null, $car, $cdr, $caar, $cdar, $caddr, $cadar
      • note: anything that cannot be divided is an atom, no relation to clojure atoms
      • last few: various combos of car and cdr for convenience
    • elaborate definitions:
      • $cond: own version of cond to keep syntax close to the original
      • $pairlis: accepts three args: two lists, and a list of existing pairs, combines the first two lists pairwise, and combines with the existing paired list
      • $assoc: lets you pull key-value pair out of an association list (list of paired lists)
      • $evcon: takes list of paired conditions and expressions, plus a context, will return the result of the expression for the first condition that evaluates to true
      • $evlist: takes list of expressions, with a condition, and a context, and then evaluates the result of the condition + the expression in a single list
      • $apply
      • $eval
    • live code demo
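    A minimal sketch — mine, not the speaker's code — of two of the helpers above, following the $-prefix convention and keeping close to the lisp 1.5 definitions:

    ```clojure
    ;; $pairlis: zip two lists into pairs, prepended onto an existing
    ;; association list (the environment)
    (defn $pairlis [xs ys env]
      (if (empty? xs)
        env
        (cons (list (first xs) (first ys))
              ($pairlis (rest xs) (rest ys) env))))

    ;; $assoc: look up the pair whose key matches k in an association list
    (defn $assoc [k env]
      (cond (empty? env)        nil
            (= k (ffirst env))  (first env)
            :else               ($assoc k (rest env))))

    (println ($pairlis '(A B) '(1 2) '((C 3))))
    (println ($assoc 'B ($pairlis '(A B) '(1 2) ())))
    ```

    $eval and $apply are then defined in terms of these, which is the heart of the "universal lisp function" from section 1.6 of the manual.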
    INVITED TALK FROM GUY STEELE: It’s time for a New Old Language
    • “the most popular programming language in computer science”
    • no compiler, but lots of cross-translations
    • would say the name of the language, but doesn’t seem to have one
    • so: CSM (computer science metanotation)
    • has built-in datatypes, expressions, etc
    • it’s beautiful, but it’s getting messed up!
    • walk-throughs of examples, how to read it (drawn from recent ACM papers)
    • “isn’t it odd that language theorists wanting to talk about types, do it in an untyped language?”
    • wrote toy compiler to turn latex expressions of CSM from emacs buffer into prolog code, proved it can run (to do type checking)
    • inference rules: Gentzen Notation (notation for “natural deduction”)
    • BNF: can trace it all the way back to 4th century BCE, with Panini’s sanskrit grammar
    • regexes: took thirty years to settle on a notation (1951–1981), but basically hasn’t changed since 1981!
    • final form of BNF: not set down till 1996, though based on a paper from 1977
    • but even then, variants persist and continue to be used (especially in books)
    • variants haven’t been a problem, because the common pieces are easy enough to identify and read
    • modern BNF in current papers is very similar to classic BNF, but with 2 changes to make it more concise:
      • use single letters instead of meaningful phrases
      • use bar to indicate repetition instead of … or *
    • substitution notation: started with Church, has evolved and diversified over time
    • current favorite: e[v/x] to represent “replace x with v in e”
    • the number of variants in live use has continued to increase over time, instead of rising and falling (!)
    • bigger problem: some substitution variants are also being used to mean function/map update, which is a completely different thing
    • theory: these changes are being driven by the page limits for computer science journals (all papers must fit within 10 pages)
    • overline notation (dots and parentheses, used for grouping): can go back to 1484, when chuquet used underline to indicate grouping
      • 1702: leibniz switched from overlines to parentheses for grouping, to help typesetters publishing his books
    • three notations duking it out for 300 years!
    • vectors: notation -> goes back to 1813, and jean-robert argand (for graphing complex numbers)
    • nested overline notation leads to confusion: how do we know how to expand the expressions that are nested?
    • one solution: use an escape from the defaults, when needed, like backtick and tilde notation in clojure
    • conclusions:
      • CSM is a completely valid language
      • should be a subject of study
      • has issues, but those can be fixed
      • would like to see a formal theory of the language, along with tooling for developing in it, checking it, etc
      • thinks there are opportunities for expressing parallelism in it
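    As a concrete instance of the notations discussed (my example, not one from the talk): the e[v/x] substitution shows up in the beta-reduction rule, and a Gentzen-style inference rule for typing function application stacks premises over a conclusion:

    ```latex
    % beta reduction: replace x with v in the body e
    (\lambda x.\, e)\; v \;\longrightarrow\; e[v/x]

    % Gentzen-style (natural deduction) typing rule for application
    \frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2
          \qquad
          \Gamma \vdash e_2 : \tau_1}
         {\Gamma \vdash e_1\; e_2 : \tau_2}
    ```

    Both pieces are exactly the kind of CSM that, per the talk, every paper writes but no tool checks.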
    Day Three

    Declarative Deep Learning in Clojure

    • starts with explanation of human cognition and memory
    • at-your-desk memory vs in-the-hammock memory
    • limitation of neural networks: once trained for a task, it can’t be retrained to another without losing the first
      • if you train a NN to recognize cats in photos, you can’t then ask it to analyze a time series
    • ART architecture: uses two layers, F1 and F2, the first to handle data that has been seen before, the second to “learn” on data that hasn’t been encountered before
    • LSTM-cell processing:
      • what should we forget?
      • what’s new that we care about?
      • what part of our updated state should we pass on?
    • dealing with the builder pattern in java: more declarative than sending a set of ordered args to a constructor
    • his lib allows keyword args to be passed in to the builder function, don’t have to worry about ordering or anything
    • by default, all functions produce a data structure that evaluates to a dl4j object
    • live demos (but using pre-trained models, no live training, just evaluation)
    • what’s left?
      • graphs
      • front end
      • kafka support
      • reinforcement learning
    Learning Clojure Through Logo
    • disclaimer: personal views, not views of employers
    • logo: language to control a turtle, with a pen that can be up (no lines) or down (draws lines as it moves)
    • …technical difficulties, please stand by…
    • live demo of clojure/script version in the browser
    • turns out logo is a lisp (!): function call is always in first position, give it all args, etc
    • even scratch is basically lisp-like
    • irony: we’re using lisp to teach kids how to program, but then they go off to work in the world of curly braces and semicolons
    • clojure-turtle lib: open-source implementation of the logo commands in clojure
    • more live demos
    • recommends reading seymour papert’s book: “Mindstorms: Children, Computers, and Powerful Ideas”
    • thinks clojure (with the power of clojurescript) is the best learning language
    • have a tutorial that introduces the turtle, logo syntax, moving the turtle, etc
    • slowly introduces more and more clojure-like syntax, function definitions, etc
    • fairly powerful environment: can add own buttons for repeatable steps, can add animations, etc
    • everything’s in the browser, so no tools to download, nothing to mess with
    • “explaining too early can hurt”: want to start with as few primitives as possible, make the intro slow
    • can create your own lessons in markdown files, can just append the url to the markdown file and it’ll load (!)
    • prefer that you send in the lessons to them, so they can put them in the lessons index for everyone to benefit
    • have even translated the commands over to multiple languages, so you don’t have to learn english before learning programming (!)
    • lib: cban, which has translations of clojure core, can be used to offer translations of your lib code into something other than english
    • clojurescript repls: Klipse (replaces all clojure code in your page with an interactive repl)
    • comments/suggestions/contributions welcome
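    The lisp-ness of logo is easy to see in code. This is a self-contained toy turtle of my own — not the clojure-turtle lib, which actually draws — where the classic "walk a square" program is just ordinary prefix calls threaded through a state map:

    ```clojure
    ;; toy turtle: position + heading, no drawing
    (def start {:x 0.0 :y 0.0 :angle 0.0})

    (defn right [t deg]
      (update t :angle + deg))

    (defn forward [t dist]
      (let [rad (Math/toRadians (:angle t))]
        (-> t
            (update :x + (* dist (Math/cos rad)))
            (update :y + (* dist (Math/sin rad))))))

    ;; logo's REPEAT 4 [FORWARD 10 RIGHT 90]: four sides of a square
    ;; bring the turtle back home
    (def home (reduce (fn [t _] (-> t (forward 10) (right 90)))
                      start
                      (range 4)))

    (println home)
    ```

    The turtle ends up back at the origin (within floating-point error), facing 360 degrees — the same "state in, state out" shape the clojure-turtle tutorial builds toward.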
    → 5:00 AM, Oct 17
  • Tubes by Andrew Blum

    A nice, quick intro to the physical infrastructure of the internet. Doesn’t really go into how all those pieces work – there’s no discourse on the technology behind a router – but does build a mental image of the boxes, buildings, and people that keep the world connected.

    Three things I learned:

    • ARPAnet's first Interface Message Processor (IMP) was installed at UCLA in 1969. The machines were manufactured on the East Coast, but only West Coast universities were open to the idea of the network at the time.
    • In 1998, the Netherlands passed two laws to pave the way for fiber everywhere. One required landowners to grant right of way for holes to be dug; the other required any company digging a hole to lay fiber to let other companies lay their own cable in the same hole and share the costs. The one-two punch made it cheaper and easier to lay fiber, and also blocked anyone from getting a monopoly.
    • The busiest internet route in the world is between London and New York, carrying more traffic than any other line.
    → 6:00 AM, Jul 10
  • Introducing elm-present

    I’m in love with Elm. No, really.

    I don’t know if it’s just that I’ve been away from front-end development for a few years, but working in Elm has been a breath of fresh air.

    When my boss offered to let us give Tech Talks on any subject at the last company meetup, I jumped at the chance to talk about Elm.

    And, of course, if I was going to give a talk about Elm, I had to make my slides in Elm, didn’t I?

    So I wrote elm-present.

    It’s a (very) simple presentation app. Slides are json files that have a title, some text, a background image, and that’s it. Each slide points to the one before it, and the one after, for navigation.

    elm-present handles reading in the files, parsing the json, and displaying everything (in the right order).

    And the best part? You don’t need a server to run it. Just push everything up to Dropbox, open the present.html file in your browser, and voilà!

    You can see the talk I gave the meetup here, as a demo.

    → 6:00 AM, Jul 5
  • Seven More Languages in Seven Weeks: Julia

    Julia feels…rough.

    There are parts I absolutely love, like the strong typing, the baked-in matrix operations, and the support for multi-type dispatch.

    Then there are the pieces that seem incomplete. Like the documentation, which is very extensive, but proved useless when trying to find out the proper way to build dictionaries in the latest version. Or the package system, which will install things right into a running repl (cool!) but does it without getting your permission for all its dependencies (boo).

    All in all, I’d like to build something more extensive in Julia. Preferably something ML-related, that I might normally build in Python.

    Day One

    • can install with brew
    • book written for 0.3, newest version is 0.5
    • has a repl built in :)
    • typeof for types
    • "" for strings, '' only for single-chars
    • // when you want to divide and leave it divided (no float, keep the fraction)
    • has symbols
    • arrays are typed, but can contain more than one type (will switch from char to any, for example)
    • commas are required for lists, arrays, etc (boo)
    • tuples: fixed sized bags of values, typed according to what they hold
    • arrays carry around their dimensionality (will be important for matrix-type ops later on)
    • has dictionaries as well
    • hmm: typeof({:foo => 5}) -> vector syntax is discontinued
    • Dicts have to be explicitly built now: Dict(:foo => 5) is the equivalent
    • XOR operator with $
    • bits to see the binary of a value
    • can assign to multiple variables at once with commas (like python)
    • trying to access an undefined key in a dict throws an error
    • in can check membership of arrays or iterators, but not dictionaries
    • but: can check for key and value in dict using in + pair of key, value: in(:a => 1, explicit)
    • book's syntax of using a tuple for the search is incorrect
    • julia docs are really...not helpful :/
    • book's syntax for set construction is also wrong
    • nothing in the online docs to correct it
    • (of course, nothing in the online docs to correct my Dict construction syntax, either)
    • can construct Set with: Set([1, 2, 3])
    • arrays are typed (Any for multiple types)
    • array indexes start at 1, not 0 (!) [follows math here]
    • array slices include the ending index
    • can mutate index by assigning to existing index, but assigning to non-existing index doesn't append to the array, throws error
    • array notation is row, column
    • * will do matrix multiplication (means # of rows of first has to match # of columns of second)
    • regular element-wise multiplication needs .*
    • need a transpose? just add '
    • very much like linear algebra; baked-in
    • dictionaries are typed, will throw error if you try to add key/value to them that doesn't match the types it was created with
    • BUT: can merge a dict with a dict with different types, creates a new dict with Any to hold the differing types (keys or values)

    Day Two

    • if..elseif...end
    • if check has to be a boolean; won't coerce strings, non-boolean values to booleans (nice)
    • reference vars inside of strings with $ prefix: println("$a")
    • has user-defined types
    • can add type constraints to user-defined type fields
    • automatically gets constructor fn with the same name as the type and arguments, one per field
    • subtype only one level
    • abstract types are just ways to group other types
    • no more super(), use supertype() -> suggested by compiler error message, which is nice
    • functions return value of last expression
    • ... to get a collection of args
    • +(1, 2) -> yields 3, operators can be used as prefix functions
    • ... will expand collection into arguments for a function
    • will dispatch function calls based on the types of all the arguments
    • typo on pg 208: int() doesn't exist, it's Int()
    • WARNING: Base.ASCIIString is deprecated, use String instead.
    • no need to extend protocols or objects, classes, etc to add new functions for dispatching on core types: can just define the new functions, wherever you like, julia will dispatch appropriately
    • avoids problem with clojure defmulti's, where you have to bring in the parent lib all the time
    • julia has erlang-like processes and message-passing to handle concurrency
    • WARNING: remotecall(id::Integer,f::Function,args...) is deprecated, use remotecall(f,id::Integer,args...) instead.
    • (remotecall arg order has changed)
    • randbool -> NOPE, try rand(Bool)
    • looks like there's some overhead in using processes for the first time; pflip_coins times are double the non-parallel version at first, then are reliably twice as fast
    • julia founders answered the interview questions as one voice, with no distinction between them
    • whole section in the julia manual for parallel computing

    Day Three

    • macros are based off of lisp's (!)
    • quote with :
    • names fn no longer exists (for the Expr type, just fine for the Module type)
    • use fieldnames instead
    • unquote -> $
    • invoke macro with @ followed by the args
    • Pkg.add() will fetch directly into a running repl
    • hmm...installs homebrew without checking if it's on your system already, or if you have it somewhere else
    • also doesn't *ask* if it's ok to install homebrew
    • not cool, julia, not cool
    • even then, not all dependencies installed at the time...still needed QuartzIO to display an image
    • view not defined
    • ImageView.view -> deprecated
    • imgshow does nothing
    • docs don't help
    • hmm...restarting repl seems to have fixed it...window is hidden behind others
    • img no longer has data attribute, is just the pixels now
    • rounding errors means pixels != pixels2
    • ifloor -> floor(Int64, val) now
    • works!
    → 6:00 AM, Apr 5
  • Seven More Languages in Seven Weeks: Elixir

    So frustrating. I had high hopes going in that Elixir might be my next server-side language of choice. It’s built on the Erlang VM, after all, so concurrency should be a breeze. Ditto distributed applications and fault-tolerance. All supposedly wrapped in a more digestible syntax than Erlang provides.

    Boy, was I misled.

    The syntax seems to be heavily Ruby-influenced, in a bad way. There’s magic methods, black box behavior, and OOP-style features built in everywhere.

    The examples in this chapter go deeply into this Ruby-flavored world, and skip entirely over what I thought were the benefits to the language. If Elixir makes writing concurrent, distributed applications easier, I have no idea, because this book doesn’t bother working examples that highlight it.

    Instead, the impression I get is that this is a way to write Ruby in Erlang, an attempt to push OOP concepts into the functional programming world, resulting in a hideous language that I wouldn’t touch with a ten-foot-pole.

    I miss Elm.

    Day One

    • biggest influences: lisp, erlang, ruby
    • need to install erlang *and* elixir
    • both available via brew
    • syntax changing quickly, it's a young language
    • if do:
    • IO.puts for println
    • expressions in the repl always have a return value, even if it's just :ok
    • looks like it has symbols, too (but they're called atoms)
    • tuples: collections of fixed size
    • can use pattern matching to destructure tuples via assignment operator
    • doesn't allow mutable state, but can look like it, because the compiler will rename vars and shuffle things around for you if you assign to price (say) multiple times
    • weird: "pipes" |> for threading macros
    • dots and parens only needed for anonymous functions (which can still be assigned to a variable)
    • prints out a warning if you redefine a module, but lets you do it
    • pattern matching for multiple functions definition in a single module (will run the version of the function that matches the inputs)
    • can define one module's functions in terms of another's
    • can use when conditions in function def as guards to regulate under what inputs the function will get run
    • scripting via .exs files, can run with iex
    • put_in returns an updated copy of the map, it doesn't update the map in place
    • elixir's lists are linked lists, not arrays!
    • char lists are not strings: dear god
    • so: is_list "string" -> false, but is_list 'string' -> true (!)
    • wat
    • pipe to append to the head
    • when destructuring a list, the number of items on each side have to match (unless you use the magic pipe)
    • can use _ for matching arbitrary item
    • Enum for processing lists (running arbitrary functions on them in different ways, like mapping and reducing, filtering, etc)
    • for comprehensions: a lot like python's list comprehensions; takes a generator (basically ways to pull values from a list), an optional filter (filter which values from the list get used), and a function to run on the pulled values
    • elixir source is on github

    Day Two

    • mix is built in to elixir, installing the language installs the build tool (nice)
    • basic project template includes a gitignore, a readme, and test files
    • source files go in lib, not src
    • struct: map with fixed set of fields, that you can add behavior to via functions...sounds like an object to me :/
    • iex -S mix to start iex with modules from your project
    • will throw compiler errors for unknown keys, which is nice, i guess?
    • since built on the erlang vm, but not erlang, we can use macros, which get expanded at compile time (presumably, to erlang code)
    • should is...well...kind of a silly macro
    • __using__ just to avoid a fully-qualified call seems...gross...and too implicit
    • and we've got to define new macros to override compile-time behavior? i...i can't watch
    • module attributes -> compile-time variables -> object attributes by another name?
    • use, __using__, @before_compile -> magic, magic everywhere, so gross
    • state machine's "beautiful syntax" seems more like obscure indirection to me
    • can elixir make me hate macros?
    • whole thing seems like...a bad example. as if the person writing it is trying to duplicate OOP-style inheritance inside a functional language.
    • elixir-pipes example from the endnotes (github project) is much better at showing the motivation and usage of real macros

    Day Three

    • creator's main language was Ruby...and it shows :/
    • spawn returns the process id of the underlying erlang process
    • pattern matching applies to what to do with the messages a process receives via its inbox
    • can write the code handling the inbox messages *after* the messages are sent (!)
    • task -> like future in clojure, can send work off to be done in another process, then later wait for the return value
    • use of Erlang's OTP built into Elixir's library
    • construct the thing with start_link, but send it messages via GenServer...more indirection
    • hmm...claims it's a "fully distributed server", but all i see are functions getting called that return values, no client-server relationship here?
    • final example: cast works fine, but call is broken (says process not alive; same message regardless of what command is sent in: :rent, :return, etc)
    • oddly enough, it works *until* we make the changes to have the supervisor run everything for us behind the scenes ("like magic!")
    • endnotes say we learned about protocols, but they were mentioned only once, in day two, as something we should look up on our own :/
    • would have been nicer to actually *use* the concurrency features of language, to, idk, maybe use all the cores on your laptop to run a map/reduce job?
    → 7:00 AM, Mar 1
  • The Problem with Programmer Interviews

    You’re a nurse. You go in to interview for a new job at a hospital. You’re nervous, but confident you’ll get the job: you’ve got ten years of experience, and a glowing recommendation from your last hospital.

    You get to the interview room. There must be a mistake, though. The room number they gave you is an operating room.

    You go in anyway. The interviewer greets you, clipboard in hand. He tells you to scrub up, join the operation in progress.

    “But I don’t know anything about this patient,” you say. “Or this hospital.”

    They wave away your worries. “You’re a nurse, aren’t you? Get in there and prove it.”

    ….

    You’re a therapist. You’ve spent years counseling couples, helping them come to grips with the flaws in their relationship.

    You arrive for your interview with a new practice. They shake your hand, then take you into a room where two men are screaming at each other. Without introducing you, the interviewer pushes you forward.

    “Fix them,” he whispers.

    …

    You’re a pilot, trying to get a better job at a rival airline. When you arrive at your interview, they whisk you onto a transatlantic flight and sit you in the captain’s chair.

    “Fly us there,” they say.

    …

    You’re a software engineer. You’ve been doing it for ten years. You’ve seen tech fads come and go. You’ve worked for tiny startups, big companies, and everything in-between. Your last gig got acquired, which is why you’re looking for a new challenge.

    The interviewers – there’s three of them, which makes you nervous – smile and shake your hand. After introducing themselves, they wave at the whiteboard behind you.

    “Code for us.”

     

    → 7:00 AM, Nov 23
  • Seven More Languages in Seven Weeks: Factor

    Continuing on to the next language in the book: Factor.

    Factor is…strange, and often frustrating. Where Lua felt simple and easy, Factor feels simple but hard.

    Its concatenative syntax looks clean, just a list of words written out in order, but reading it requires you to keep a mental stack in your head at all times, so you can predict what the code does.

    Here’s what I learned:

    Day One

    • not functions, words
    • pull and push onto the stack
    • no operator precedence, the math words are applied in order like everything else
    • whitespace is significant
    • not anonymous functions: quotations
    • `if` needs quotations as the true and false branches
    • data pushed onto stack can become "out of reach" when more data gets pushed onto it (ex: store a string, and then a number, the number is all you can reach)
    • the `.` word becomes critical, then, for seeing the result of operations without pushing new values on the stack
    • also have shuffle words for just this purpose (manipulating the stack)
    • help documentation crashes; no listing online for how to get word docs in listener (plenty for vocab help, but that doesn't help me)
    • factor is really hard to google for

    Day Two

    • word definitions must list how many values they take from the stack and how many they put back
    • names in those definitions are not args, since they are arbitrary (not used in the word code itself)
    • named global vars: symbols (have get and set; aka getters and setters)
    • standalone code imports NOTHING, have to pull in all needed vocabularies by hand
    • really, really hate the factor documentation
    • for example, claims strings implement the sequence protocol, but that's not exactly true...can't use "suffix" on a string, for example

    Day Three

    • not maps, TUPLES
    • auto-magically created getters and setters for all
    • often just use f for an empty value
    • is nice to be able to just write out lists of functions and not have to worry about explicit names for their arguments all over the place
    • floats can be an issue in tests without explicit casting (no types for functions, just values from the stack)
    • lots of example projects (games, etc) in the extra/ folder of the factor install
    → 6:00 AM, Oct 12
  • Seven More Languages in Seven Weeks: Lua

    Realized I haven’t learned any new programming languages in a while, so I picked up a copy of Seven More Languages in Seven Weeks.

    Each chapter covers a different language. They’re broken up into ‘Days’, with each day’s exercises digging deeper into the language.

    Here’s what I learned about the first language in the book, Lua:

    Day One

    Just a dip into basic syntax.
    • table based
    • embeddable
    • whitespace doesn't matter
    • no integers, only floating-point (!)
    • comparison operators will not coerce their arguments, so you can't do 42 < '43'
    • functions are first class
    • has tail-call-optimization (!)
    • extra args are ignored
    • omitted args just get nil
    • variables are global by default (!)
    • can use anything as key in table, including functions
    • array indexes start at 1 (!)

    Day Two

    Multithreading and OOP.
    • no multithreading, no threads at all
    • coroutines will only ever run on one core, so have to handle blocking and unblocking them manually
    • explicit over implicit, I guess?
    • since can use functions as values in tables, can build entire OO system from scratch using (self) passed in as first value to those functions
    • coroutines can also get you memoization, since yielding means the state of the fn is saved and resumed later
    • modules: can choose what gets exported, via another table at the bottom
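    That coroutine point is easier to see in code. Python generators pause and resume the way Lua coroutines do, so here's a sketch of the memoization-by-yielding idea (the Fibonacci example is mine, not the book's):

```python
# A generator's local state survives across yields, so intermediate
# results are "remembered" without an explicit cache -- the same trick
# the book pulls with Lua's coroutine.yield.
def fib():
    a, b = 0, 1
    while True:
        yield a          # pause here; a and b are saved until the next resume
        a, b = b, a + b

gen = fib()
first_ten = [next(gen) for _ in range(10)]
print(first_ten)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

    Each `next(gen)` resumes the function exactly where it yielded, so earlier work is never redone.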

    Day Three

    A very cool project -- build a midi player in Lua with C++ interop -- that was incredibly frustrating to get working. Nothing in the chapter was helpful. Learned more about C++ and Mac OS X audio than Lua.
    • had to add Homebrew's Lua include directory (/usr/local/Cellar/lua/5.2.4_3/include) into include_directories command in CMakeLists.txt file
    • when compiling play.cpp, linker couldn't find lua libs, so had to invoke the command by hand (after reading ld manual) with brew lua lib directory added to its search path via -L
    • basically, add this to CMakeFiles/play.dir/link.txt: -L /usr/local/Cellar/lua/5.2.4_3/lib -L /usr/local/Cellar/rtmidi/2.1.1/lib
    • adding those -L declarations will ensure make will find the right lib directories when doing its ld invocation (linking)
    • also had to go into the Audio Midi Setup utility and set the IAC Driver to "Device is online" in order for any open ports to show up
    • AND then needed to be sure was running the Simplesynth application with the input set to the IAC Driver, to be able to hear the notes
    → 6:00 AM, Sep 21
  • Follow the Tweeting Bot

    I have a problem.

    No, not my fondness for singing 80s country in a bad twang during karaoke.

    I mean a real, nerd-world problem: I have too many books to read.

    I can’t leave any bookstore without buying at least one. For a good bookstore, I’ll walk out with half a dozen or more, balancing them in my arms, hoping none of them fall over.

    I get them home and try to squeeze them into my bookshelf of “books I have yet to read” (not to be confused with my “books I’ve read and need to donate” or “books I’ve read and will re-read someday when I have the time” shelves). That shelf is full, floor to ceiling.

    My list of books to read is already too long for me to remember them all. And that’s not counting the ones I have sitting in ebook format, waiting on my Kobo or iPhone for me to tap their cover art and dive in.

    Faced with so much reading material, so many good books waiting to be read, my question is this: What do I read next?

    I could pick based on mood. But that usually means me sitting in front of my physical books, picking out the one that grabs me. I could pick based on which ones I’ve bought most recently, which would probably narrow things down to just my ebooks.

    But I want to be able to choose from all of my books, physical and virtual, at any time.

    So I wrote a bot to help me.

    It listens to my twitter stream for instructions. When I give it the right command, it pulls down my to-read shelf from Goodreads (yes, I put all of my books, real and electronic, into Goodreads. yes, it took much longer than I thought it would), ranks them in order of which ones I should read first, and then tweets back to me the top 3.
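    For the curious, the ranking step might look something like this sketch in Python; the scoring criteria and book data here are invented for illustration, not what my bot actually uses:

```python
# Hypothetical core of the bot: given a to-read shelf, rank the books
# and return the top three. Sorting here favors higher ratings, then
# shorter books; the real scoring could weigh anything Goodreads exposes.
def top_three(shelf):
    ranked = sorted(shelf, key=lambda b: (-b["rating"], b["pages"]))
    return [b["title"] for b in ranked[:3]]

shelf = [
    {"title": "Book A", "rating": 4.1, "pages": 320},
    {"title": "Book B", "rating": 4.5, "pages": 210},
    {"title": "Book C", "rating": 3.9, "pages": 150},
    {"title": "Book D", "rating": 4.5, "pages": 480},
]
print(top_three(shelf))  # ['Book B', 'Book D', 'Book A']
```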

    I’ve been following its recommendations for about a month now, and so far, it’s working. Footsteps in the Sky was great. Data and Goliath was eye-opening. The Aesthetic of Play changed the way I view art and games.

    Now, if only I could train it to order books for me automatically…

    → 6:00 AM, Apr 27
  • Notes from Strange Loop 2015: Day Two

    Pixie

    • lisp
    • own vm
    • compiled using RPython tool chain
    • RPython - reduced python
      • used in PyPy project
      • has own tracing JIT
    • runs on os x, linux, ARM (!)
    • very similar to clojure, but deviates from it where he wanted to for performance reasons
    • has continuations, called stacklets
    • has an open-ended object system; deftype, etc
    • also wanted good foreign function interface (FFI) for calling C functions
    • wants to be able to do :import Math.h :refer cosine
    • ended up writing template that can be called recursively to define everything you want to import
    • writes C file using template that has everything you need and then compiles it and then uses return values with the type info, etc
    • you can actually call python from pixie, as well (if you want)
    • not ready for production yet, but a fun project and PRs welcome

    History of Programming Languages for 2 Voices

    • David Nolen and Michael Bernstein
    • a programming language "mixtape"

    Big Bang: The World, The Universe, and The Network in the Programming Language
    • Matthias Felleisen
    • worst thought you can have: your kids are in middle school
    • word problems in math are not interesting, they're boring
    • can use image placement and substitution to create animations out of word problems
    • mistake to teach children programming per se. they should use programming to help their math, and math to help their programming. but no programming on its own
    • longitudinal study: understanding a function, even if you don't do any other programming ever, means a higher income as an adult
    • can design curriculum taking kids from middle school (programming + math) to high school (scheme), college (design programs), to graduate work (folding network into the language)
    → 9:00 AM, Oct 7
  • Notes from Strange Loop 2015: Day One

    Unconventional Programming with Chemical Computing

    • Carin Meier
    • Living Clojure
    • @Cognitect
    • Inspired by Book - unconventional programming paradigms
    • "the grass is computing"
      • all living things process information via chemical reactions on molecular level
      •  hormones
      • immune system
      • bacteria signal processing
    • will NOT be programming with chemicals; using the metaphor of molecules and reactions to do computing
      • nothing currently in the wild using chemical computing
    • at the heart of chemical programming: the reaction
    • will calculate primes two ways:
      • traditional
      • with prime reaction
    • uses clojure for the examples
    • prime reaction
      • think of the integers as molecules
      • simple rule: take a vector of 2 integers, divide them, if the mod is zero, return the result of the division, otherwise, return the vector unchanged
      • name of this procedure: gamma chemical programming
      • reaction is a condition + action
      • execute: replacement of original elements by resulting element
      • solution is known when it results in a steady state (hence, for prime reaction, have to churn over lists of integers multiple times to filter out all the non-primes)
    • possible advantages:
      • modeling probabilistic systems
      • drive a computation towards a global max or min
    • higher order
      • make the functions molecules as well
      • fn could "capture" integer molecules to use as args
      • what does it do?
      • it "hatches" => yields original fn and result of applying fn to the captured arguments
      • reducing reaction fn: return fewer arguments than is taken in
      • two fns interacting: allow to exchange captured values (leads to more "stirring" in the chem sims)
    • no real need for sequential processing; can do things in any order and still get the "right" answer
    • dining philosophers problem
      • something chemical programming handles well
      • two forks: eating philosopher
      • one fork or no forks: thinking philosopher
      • TP with 2fs reacting with EAT => EP
    • "self organizing": simple behaviors combine to create what look like complex behaviors
    • mail system: messages, servers, networks, mailboxes, membranes
      • membranes control reactions, keep molecules sorted
      • passage through membranes controlled by servers and network
      • "self organizing"

    How Machine Learning helps Cancer Research

    • evelina gabasova
    • university of cambridge
    • cost per human genome has gone down from $100mil (2001) to a few thousand dollars (methodology change in mid-2000s paid big dividends)
    • cancer is not a single disease; underlying cause is mutations in the genetic code that regulates protein formation inside the cell
    • brca1 and brca2 are guardians; they check the chromosomes for mistakes and kill cells that have them, so suppress tumor growth; when they stop working correctly or get mutated, you can have tumors
    • clustering: finding groups in data that are more similar to each other than to other data points
      • example: clustering customers
      • but: clustering might vary based on the attributes chosen (or the way those attributes are lumped together)?
      • yes: but choose projection based on which ones give the most variance between data points
      • can use in cancer research by plotting genes and their expression and looking for grouping
    • want to be able to craft more targeted responses to the diagnosis of cancer based on the patient and how they will react
    • collaborative filtering
      • used in netflix recommendation engine
      • filling in cells in a matrix
      • compute as the product of two smaller matrices
      • in cancer research, can help because the number of people with certain mutations is small, leading to a sparsely populated database
    • theorem proving
      • basically prolog-style programming, constraints plus relations leading to single (or multiple) solutions
      • can use to model cancer systems
      • was used to show that chronic myeloid leukemia is a very stable system, that just knocking out one part will not be enough to kill the bad cell and slow the disease; helps with drug and treatment design
      • data taken from academic papers reporting the results of different treatments on different populations
    • machine learning not just for targeted ads or algorithmic trading
    • will become more important in the future as more and more data becomes available
    • Q: how long does the calculation take for stabilization sims?
      • A: for very simple systems, can take milliseconds
    • Q: how much discovery is involved, to find the data?
      • A: actually, whole teams developing text mining techniques for extracting data from academic papers (!)
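    The "product of two smaller matrices" idea from the collaborative filtering section can be sketched in a few lines of Python; the factor values below are invented for illustration:

```python
# A 3x3 ratings matrix approximated as the product of a 3x2 user-factor
# matrix U and a 2x3 item-factor matrix V: rating[u][i] ~= U[u] . V[:, i].
# Missing cells get predicted the same way -- the point of the technique.
U = [[1.0, 0.2],          # each row: one user's affinity for 2 latent traits
     [0.1, 0.9],
     [0.8, 0.5]]
V = [[5.0, 1.0, 4.0],     # each column: how much one item expresses each trait
     [1.0, 5.0, 2.0]]

def predict(user, item):
    return sum(U[user][k] * V[k][item] for k in range(2))

# Predict the (possibly unobserved) rating of user 2 for item 1:
print(round(predict(2, 1), 2))  # 3.3
```

    Fitting U and V from the sparse observed cells is the hard part (usually gradient descent or alternating least squares); with few patients per mutation, sparsity is exactly the problem this attacks.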

    When Worst is Best

    • Peter Bailis
    • what if we designed computer systems for the worst-case scenarios?
    • website that served 7.3 billion simultaneous users; would on average have lots of idle resources
    • hardware: what if we built this chip for the mars rover? would lead to very expensive packaging (and a lot of R&D to handle low-power low-weight environments)
    • security: all our devs are malicious; makes code deployment harder
    • designing for the worst case often penalizes the average case
    • could we break the curve? design for the worst case and improve the average case too
    • distributed systems
      • almost everything non-trivial is distributed these days
      • operate over a network
      • networks make designs hard
        • packets can be delayed
        • packets may be dropped
      • async network: can't tell if message has been delayed or dropped
        • handle this by adding replicas that can respond to any request at any time
        • network interruptions don't stop service
    • no coordination means even when everything is fine, we don't have to talk
      • possible infinite service scale-out
    • coordinated multi-server transactions pay large penalty as we add more servers (from locks); get more throughput if we let access be uncoordinated
    • don't care about latency if you don't have to send messages everywhere
    • but what about the CAP theorem?
      • inktomi from eric brewer: for large scale services, have to trade off between always giving an answer and always giving the right answer
      • takeaway: certain properties of a system (like serializability) require unavailability
      • original paper: Gilbert and (Nancy) Lynch
      • common conclusion: availability is too expensive, and we have to give up too much, and it only matters during failures, so forget about it
    • if you use worst case as design tool, you skew toward coordination-avoiding databases
      • high coordination is legacy of old db design
      • coordination-free designs are possible
    • example: read committed isolation
      • goal: never read uncommitted data
      • legacy implementation: lock records during access (coordination)
      • one way: copy on write (x -> x', do stuff -> write back to x)
      • or: versioning
      • for more detail, see martin's talk on saturday about transactions
    • research on coordination-free systems have potential for huge speedups
    • other situations where worst-case thinking yields good results
      • replication for fault tolerance can also increase your request-serving capacity
      • fail-over can help deployments/upgrades: if it's automatic, you can shut off the primary whenever you want and know that the backups will take over, then bring the primary back up when your work is done
      • tail latency in services:
        • avg of 1.2ms (not bad) can mean 0.1% of requests have 100ms (which is terrible)
        • if you're one of many services being used to fulfill a front-end request, your worst case is more likely to happen, and so drag down the avg latency for the end-user
    • universal design: designing well for everyone; ex: curb cuts, subtitles on netflix
    • sometimes best is brittle: global maximum can sit on top of a very narrow peak, where any little change in the inputs can drive it away from the optimum
    • defining normal defines our designs; considering a different edge case as normal can open up new design spaces
    • hardware: what happens if we have bit flips?
    • clusters: what's our scale-out strategy?
    • security: how do we audit data access?
    • examine your biases
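    The read committed example above can be sketched in Python; the `VersionedStore` class and its layout are my own illustration of the copy-on-write/versioning idea, not anything from the talk:

```python
# Versioned store: a writer stages changes on a private copy (x -> x')
# and "commits" by swapping the copy in. Readers only ever see committed
# versions, so read committed holds without any locks (no coordination).
class VersionedStore:
    def __init__(self):
        self.committed = {}              # key -> committed value

    def read(self, key):
        return self.committed.get(key)   # never sees in-flight writes

    def begin(self):
        return dict(self.committed)      # copy-on-write snapshot: x -> x'

    def commit(self, staged):
        self.committed = staged          # atomic swap: write x' back to x

store = VersionedStore()
txn = store.begin()
txn["balance"] = 100          # uncommitted write, invisible to readers
print(store.read("balance"))  # None -- still uncommitted
store.commit(txn)
print(store.read("balance"))  # 100
```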

    All In with Determinism for Performance and Testing in Distributed Systems

    • John Hugg
    • VoltDB
    • so you need a replicated setup?
      • could run primary and secondary
      • could allow writes to 2 servers, do conflict detection, and merge all writes
      • NOPE
    • active-active: state a + deterministic op = state b
      • if do same ops across all servers, should end up with the same state
      • have client that sends A B C to coordination system, which then sends ABC to all replicas, which do the ops in order
      • ABC: a logical log, the ordering is what's important
      • can write log to disk, for later replay
      • can replicate log to all servers, for constant active-active updates
      • can also send log across network for cluster replication
    • look out for non-determinism
      • random numbers
      • wall-clock time
      • record order
      • external systems (ping noaa for weather)
      • bad memory
      • libraries that use randomness for security
    • how to protect from non-determinism?
      • make sure sql is as deterministic as possible
      • 100% of their DML is deterministic
      • rw transactions are hard to make deterministic, have to do a little more planning (swap row-scan for tree-index scan)
      • use seeded random-number generators that are lists created in advance
      • hash up the write ops, and require replicas to send back their computed hashes once the ops are done so the coordinator can confirm the ops were deterministic
      • can also hash the whole replica state when doing a transactional snapshot
      • reduce latency by sending condensed representation of ops instead of all the steps (the recipe name, not the recipe)
    • why do it?
      • replicate faster, reduces concerns for latency
      • persist everything faster: start logging when the work is requested, not when the work is completed
      • bounded sizes: the work comes in as fast as the network allows, so the log will only be written no faster than the network (no firehose)
    • trade-offs?
      • it's more work: testing, enforcing determinism
      • running mixed versions is scary: if you fix a bug, and you're running different versions of the software between the replicas, you no longer have deterministic transactions
      • if you trip the safety checks, we shut down the cluster
    • testing?
      • multi-pronged approach: acid, sql correctness, etc
      • simulation a la foundationDB not as useful for them, since they have more states
      • message/state-machine fuzzing
      • unit tests
      • smoke tests
      • self-checking workload (best value)
        • everything written gets self-checked; so to check a read value, write it back out and see if it comes back unchanged
      • use "nefarious app": application that runs a lot of nasty transactions, checks for ACID failures
      • nasty transactions:
        • read values, hash them, write them back
        • add huge blobs to rows to slow down processing
        • add mayhem threads that run ad-hoc sql doing updates
        • multi-table joins
          • read and write multiple values
        • do it all many many times within the same transaction
      • mix up all different kinds of environment tweaks
      • different jvms
      • different VM hosts
      • different OSes
      • inject latency, disk faults, etc
    • client knows last sent and last acknowledged transaction, checker can be sure recovered data (shut down and restart) contains all the acknowledged transactions
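    The logical log plus hash-verification scheme can be sketched in Python; the op format here is hypothetical:

```python
import hashlib

# Each replica applies the same logical log in the same order, then
# hashes its writes; matching digests let the coordinator confirm the
# ops really were deterministic across replicas.
def apply_log(log):
    state = {}
    h = hashlib.sha256()
    for key, value in log:              # the ordering is what's important
        state[key] = value
        h.update(f"{key}={value};".encode())
    return state, h.hexdigest()

log = [("a", 1), ("b", 2), ("a", 3)]
state1, digest1 = apply_log(log)
state2, digest2 = apply_log(log)        # a second replica, same log
print(state1)              # {'a': 3, 'b': 2}
print(digest1 == digest2)  # True -- coordinator accepts the transaction
```

    Any non-determinism (a wall-clock read, an unseeded random number) would show up as a digest mismatch, which is exactly the safety check that shuts the cluster down.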

    Scaling Stateful Services

    • Caitie MacCaffrey
    • been using stateless services for a long time, depending on db to store and coordinate our state
    • has worked for a long time, but got to place where one db wasn't enough, so we went to no-sql and sharded dbs
    • data shipping paradigm: client makes request, service fetches data, sends data to client, throws away "stale" data
    • will talk about stateful services, and their benefits, but WARNING: NOT A MAGIC BULLET
    • data locality: keep the fetched data on the service machine
      • lower latency
      • good for data intensive ops where client needs quick responses to operations on large amounts of data
    • sticky connections and consistency
      • using sticky connections and stateful services gives you more consistency models to use: pipelined random access memory, read your write, etc
    • blog post from werner vogel: eventual consistency revisited
    • building sticky connections
      • client connecting to a cluster always gets routed to the same server
    • easiest way: persistent connections
      • but: no stickiness once connection breaks
      • also: mucks with your load balancing (connections might not all last the same amount of time, can end up with one machine holding everything)
      • will need backpressure on the machines so they can break connections when they need to
    • next easiest: routing logic in cluster
      • but: how do you know who's in the cluster?
      • and: how do you ensure the work is evenly distributed?
      • static cluster membership: dumbest thing that might work; not very fault tolerant; painful to expand;
      • next better: dynamic cluster membership
        • gossip protocols: machines chat about who is alive and dead, each machine on its own decides who's in the cluster and who's not; works so long as system is relatively stable, but can lead to split-brain pretty quickly
        • consensus systems: better consistency; but if the consensus truth holder goes down, the whole cluster goes down
    • work distribution: random placement
      • write anywhere
      • read from everywhere
      • not sticky connection, but stateful service
    • work distribution: consistent hashing
      • deterministic request placement
      • nodes in cluster get placed on a ring, request gets mapped to spot in the ring
      • can still have hot spots form, since different requests will have different work that needs to be done, can have a lot of heavy work requests placed on one node
      • work around the hot spots by having larger cluster, but that's more expensive
    • work distribution: distributed hash table
      • non-deterministic placement
    • stateful services in the real world
    • scuba:
      • in-memory db from facebook
      • believed to be static cluster membership
      • random fan-out on write
      • reads from every machine in cluster
      • results get composed by machine running query
      • results include a completeness metric
    • uber ringpop
      • nodejs library that does application-layer sharding for their dispatching services
      • swim gossip protocol for cluster membership
      • consistent hashing for work distribution
    • orleans
      • from Microsoft Research
      • used for Halo4
      • runtime and programming model for building distributed systems based on Actor Model
      • gossip protocol for cluster membership
      • consistent hashing + distributed hash table for work distribution
      • actors can take request and:
        • update their state
        • return their state
        • create a new Actor
      • request comes in to any machine in cluster, it applies hash to find where the DHT is for that client, then that DHT machine routes the request to the right Actor
      • if a machine fails, the DHT is updated to point new requests to a different Actor
      • can also update the DHT if it detects a hot machine
    • cautions
      • unbounded data structures (huge requests, clients asking for too much data, having to hold a lot of things in memory, etc)
      • memory management (get ready to make friends with the garbage collector profiler)
      • reloading state: recovering from crashes, deploying a new node, the very first connection of a session (no data, have to fetch it all)
      • sometimes can get away with lazy loading, because even if the first connection fails, you know the client's going to come back and ask for the same data anyway
      • fast restarts at facebook: with lots of data in memory, shutting down your process and restarting causes a long wait time for the data to come back up; had success decoupling memory lifetime from process lifetime, would write data to shared memory before shutting process down and then bring new process up and copy over the data from shared to the process' memory
    • should I read papers? YES!
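    The consistent-hashing placement mentioned for ringpop and Orleans can be sketched in a few lines of Python (node and request names are made up):

```python
import hashlib
from bisect import bisect

def ring_hash(key):
    # deterministic hash mapped onto the ring
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # place each node at a deterministic spot on the ring
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def route(self, request_id):
        # walk clockwise to the first node at or past the request's spot
        keys = [p for p, _ in self.points]
        idx = bisect(keys, ring_hash(request_id)) % len(self.points)
        return self.points[idx][1]

ring = Ring(["node-a", "node-b", "node-c"])
# the same request id always lands on the same node (sticky placement),
# and removing one node only remaps the requests that sat on it
print(ring.route("client-42") == ring.route("client-42"))  # True
```

    Real systems add virtual nodes (many points per machine) to smooth out the hot spots the talk warns about.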
    → 9:00 AM, Oct 5
  • How to Read Any Online Magazine on a Kobo eReader

    I’ve been trying to read various magazines – for example, The Economist – on some form of eReader for a few years now.

    At first I couldn’t do it because they didn’t have electronic editions. Then they did, but only online. Then they offered electronic versions you could subscribe to, but only for Apple products.

    Now I can find a lot of them in online bookstores – for Barnes and Noble, or Kobo – but the subscriptions only let you read them on each bookstore’s tablets.

    But there’s a workaround for the Kobo eReaders that I wanted to share.

    It takes advantage of Pocket, which lets you save web articles for later reading. Turns out that Pocket is integrated into all of Kobo’s eReaders, so any articles you save to your Pocket account will show up on your Kobo.

    Here’s how you can read any magazine or newspaper that has an online version on your eReader:

    1. Sign up for a Pocket account.
    2. Download and install the Pocket plugin for your web browser.
    3. Go to the homepage of the magazine you want to read (e.g., economist.com)
    4. Subscribe to the magazine (if you haven't already).
    5. With your subscription, navigate to the "print edition" version of the website.
    6. Now you can start saving articles for reading. Either right-click on the link to the article and select "Save to Pocket" or open the article and click the "Save to Pocket" icon in your browser's toolbar.
    7. Wait for the popup that tells you the article has been saved to Pocket.
    8. Go to your ereader. Navigate to the "Articles from Pocket" section.
    9. Sign in to Pocket if you haven't already.
    10. Your saved article(s) should sync to the ereader. Tap any one of them to read it!
    → 8:00 AM, Jul 1
  • Review: Kobo Glo HD

    I’ve had two generations of Nook ereaders. I liked holding them better than the Kindles that were available, I wanted to feel good about buying ebooks after browsing at my local Barnes & Noble, and I didn’t like the way Amazon was waging war against book publishers (and, as a consequence, on authors).

    But B&N hasn’t updated their Nook in almost two years. Their last Nook’s screen resolution is good, but still not as good as a printed book. It has an annoying habit of setting the margins so wide that the text forms a three-word column down the screen, and then locks me out of making any adjustments. It doesn’t sync the last page read between the ereader and my iPhone. Its illumination is noticeably uneven. The covers for it are terrible and expensive, so when I travel I put it back inside the box it came in. I have to re-adjust the fonts and margins every time I open (or re-open) a book, because it doesn’t remember my settings.

    None of which are show-stoppers, for sure, but over time they add up. The final straw was when B&N locked users out of downloading our ebooks to our computers. I used to do this on a regular basis, so I could save backups of the books to Dropbox. That changed a few months ago, when they took down the download link next to all the books in their users' Nook online libraries.

    So I went shopping for a new ereader. I worried that I might have to go with a Kindle, since they seemed to have the best screen resolution out there.

    Then I heard about the Kobo Glo HD. I knew Kobo already, since they stepped into the breach left behind by Google dropping its ebook partnership with independent bookstores. I knew they produced ereaders, since I’d seen them for sale at Mysterious Galaxy (as part of their collaboration with Kobo). The reviews I found of them were generally positive, and the Glo HD - which hadn’t come out yet - promised a screen resolution as good as the Kindle, and at a cheaper price.

    I couldn’t find one locally to try out, so I took the plunge and ordered it. I’m very, very glad I did; I’ve been using it for a month now, and I can honestly say this is the ereader I’ve been waiting for.

    The screen resolution is sharp enough that it looks like a printed book when I set it down on a table to read. And unlike the Nook’s dark screen, the Glo HD’s is bright enough that I don’t feel the need to turn on the reading light during the day.

    Syncing? My bookmarks sync between the Kobo app on my phone and the ereader no problem, easy as pie, even for books that I didn’t buy from Kobo.

    Sideloading was a little more complicated than I’d like. I had to use Adobe Digital Editions to connect to the reader and transfer books over, but it moved all 142 of my backed up B&N books without a hitch, and they all showed up in my Library on the Glo just fine.

    I still have to adjust the fonts sometimes between books, but I no longer care. I don’t care because the options for tweaking are incredible: I’ve got a dozen different fonts, sliders for font size, line spacing, and margins, as well as the ability to set justification to full, left, or simply off. And I’ve yet to encounter a book that locks me into reading a certain way. I’ve got full control over how the book looks, and it’s about freaking time.

    Even the case they sell for it is amazing. It’s the first case for any portable device – Nook, iPad – that actually makes the original device better. It doesn’t add to the Glo’s weight, closing it puts the reader to sleep and opening it wakes it up, and when it’s open it folds back behind the reader to make it feel even more like a book in your hand. Oh, and it kept the screen scratch-free in my backpack over four cross-country flights.

    So this is one gamble that’s completely paid off. It’s the first ereader that I prefer reading on to a paper book, so much so that I have to stop myself from buying ebook versions of the hardcovers on my bookshelf just so that I can read them on the Glo HD.

    → 8:30 AM, Jun 8
  • Notes from LambdaConf 2015

    Haskell and Power Series Brought to Life

    • not interested in convergence
    • laziness lets you handle infinite series
    • head/tail great for describing series
    • operator overloading lets you redefine things to work on a power series (list of Nums) as well as Nums
    • multiplication complication: can't multiply power series by a scalar, since they're not the same type
    • could define negation as: negate = map negate
      • instead of recursively: negate(x:xs) = negate x : negate xs
    • once we define the product of two power series, we get integer powers for free, since it's defined in terms of the product
    • by using haskell's head-tail notation, we can clear a forest of subscripts from our proofs
    • reversion, or functional inversion, can be written as one line in haskell when you take this approach:
      • revert (0:fs) = rs where rs = 0 : 1/(fs#rs)
    • can define integral and derivative in terms of zipWith over a power series
    • once we have integrals and derivatives, we can solve differential equations
    • can use to express generating functions, which lets us do things like pascal's triangle
    • can change the default ordering of type use for constants in haskell to get rationals out of the formulas instead of floats
      • default (Integer, Rational, Double)
    • all formulas can be found on web page: ???
      • somewhere on dartmouth's site
    • why not make a data type? why overload lists?
      • would have needed to define Input and Output for the new data type
      • but: for complex numbers, algebraic extensions, would need to define your own types to keep everything straight
      • also: looks prettier this way
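    The lazy-list trick from the talk translates naturally to Python generators; here is a rough sketch (not the talk's Haskell code) of power series as infinite coefficient streams, with derivative and integral defined element-wise:

```python
from itertools import islice

def geometric():
    """Coefficients of 1/(1-x) = 1 + x + x^2 + ... (an infinite series)."""
    while True:
        yield 1

def add(f, g):
    """Coefficient-wise sum of two power series."""
    for a, b in zip(f, g):
        yield a + b

def derivative(f):
    """d/dx: drop the constant term, multiply coefficient n by n."""
    next(f)  # discard a0
    for n, a in enumerate(f, start=1):
        yield n * a

def integral(f):
    """Integral with zero constant term: divide coefficient n-1 by n."""
    yield 0
    for n, a in enumerate(f, start=1):
        yield a / n

# laziness means we only ever compute the coefficients we ask for
print(list(islice(derivative(geometric()), 5)))  # [1, 2, 3, 4, 5]
```

    As in the Haskell version, nothing converges or terminates on its own; `islice` plays the role of taking a finite prefix.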

    How to Learn Haskell in Less than 5 Years

    • Chris Allen (bitemyapp)
    • title derives from how long it took him
      • though, he says he's not particularly smart
    • not steady progress; kept skimming off the surface like a stone
    • is this talk a waste of time?
      • not teaching haskell
      • not teaching how to teach haskell
      • not convince you to learn haskell
      • WILL talk about problems encountered as a learner
    • there is a happy ending: uses haskell in production very happily
    • eventually made it
      • mostly working through exercises and working on own projects
      • spent too much time bouncing between different resources
      • DOES NOT teach haskell like he learned it
    • been teaching haskell for two years now
      • was REALLY BAD at it
      • started teaching it because knew couldn't bring work on board unless could train up own coworkers
    • irc channel: #haskell-beginners
    • the guide: github.com/bitemyapp/learnhaskell
    • current recommendations: cis194 (spring '13) followed by NICTA course
    • don't start with the NICTA course; it'll drive you to depression
    • experienced haskellers often fetishize difficult materials that they didn't use to learn haskell
    • happy and productive user of haskell without understanding category theory
      • has no problem understanding advanced talks
      • totally not necessary to learn in order to understand haskell
      • perhaps for work on the frontiers of haskell
    • his materials are optimized around keeping people from dropping out
    • steers them away from popular materials because most of them are the worst ways to learn
    • "happy to work with any of the authors i've criticized to help them improve their materials"
    • people need multiple examples per concept to really get it, from multiple angles, for both good and bad ways to do things
    • doesn't think haskell is really that difficult, but coming to it from other languages means you have to throw away most of what you already know
      • best to write haskell books for non-programmers
      • if you come to haskell from js, there's almost nothing applicable
    • i/o and monad in haskell aren't really related, but they're often introduced together
    • language is still evolving; lots of the materials from 90s are good but leave out a lot of new (and useful!) things
    • how to learn: can't just read, have to work
    • writing a book with Julie (?) @argumatronic that will teach haskell to non-programmers, should work for everyone else as well; will be very, very long (longer than Real World Haskell)
    • if onboarding new employee, would pair through tutorials for 2 weeks and then cut them loose
    • quit clojure because he and 4 other clojurians couldn't debug a 250 line ns

    Production Web App in Elm

    • app: web-based doc editor with offline capabilities: DreamWriter
    • wrote original version in GIMOJ: giant imperative mess of jquery
    • knew was in trouble when he broke paste; could no longer copy/paste text in the doc
    • in the midst of going through rewrite hell, saw the simple made easy talk by rich hickey
    • "simple is an objective notion" - rich hickey
      • measure of how intermingled the parts of a system are
    • easy is subjective, by contrast: just nearer to your current skillset
    • familiarity grows over time -- but complexity is forever
    • simpler code is more maintainable
    • so how do we do this?
      • stateless functions minimize interleaving
      • dependencies are clear (so long as no side effects)
      • creates chunks of simpleness throughout the program
      • easier to keep track of what's happening in your head
    • first rewrite: functional style in an imperative language (coffeescript)
      • fewer bugs
    • then react.js and flux came out, have a lot of the same principles, was able to use that to offload a lot of his rendering code
      • react uses virtual dom that gets passed around so you no longer touch the state of the real dom
    • got him curious: how far down the rabbit-hole could he go?
      • sometimes still got bugs due to mutated state (whether accidental on his part or from some third-party lib)
    • realized: been using discipline to do functional programming, instead of relying on invariants, which would be easier
    • over 200 languages compile to js (!)
    • how to decide?
    • deal-breakers
      • slow compiled js
      • poor interop with js libs (ex: lunar.js for notes)
      • unlikely to develop a community
    • js but less painful?
      • dart, typescript, coffeescript
      • was already using coffeescript, so not compelling
    • easily talks to js
      • elm, purescript, clojurescript
      • ruled out elm almost immediately because of rendering (!)
    • cljs
      • flourishing community
      • mutation allowed
      • trivial js interop
    • purescript
      • 100% immutability + type inference
      • js interop: just add type signature
      • functions cannot have side effects* (js interop means you can lie)
    • so, decision made: rewrite in purescript!
      • but: no react or flux equivalents in purescript (sad kitten)
    • but then: a new challenger: blazing fast html in elm (blog post)
      • react + flux style but even simpler and faster (benchmarked)
    • elm js interop: ports
      • client/server relationship, they only talk with data
      • pub/sub communication system
    • so, elm, hmm…
      • 100% immutability, type inference
      • js interop preserves immutability
      • time travelling debugger!!!
      • saves user inputs, can replay back and forth, edit the code and then replay with the same inputs, see the results
    • decision: rewrite in elm!
    • intermediate step of rewriting in functional coffeescript + react and flux was actually really helpful
      • could anticipate invariants
      • then translate those invariants over to the elm world
      • made the transition to elm easier
    • open-source: rtfeldman/dreamwriter and dreamwriter-coffee on github
    • code for sidebar looks like templating language, but is actually real elm (dsl)
    • elm programs are built of signals, which are just values that change over time
    • only functions that have access to a given signal have any chance of affecting it (or messing things up)
    • so how was it?
      • SO AWESOME
      • ridiculous performance
      • since you can depend on the function always giving you the same result for the same arguments, you can CACHE ALL THE THINGS (called lazy in Elm)
      • language usability: readable error messages from the compiler (as in, paragraphs of descriptive text)
      • refactoring is THE MOST FUN THING
      • semantic versioning is guaranteed. for every package. enforced by the compiler. yes, really.
      • diff tool for comparing public api for a lib
      • no runtime exceptions EVER
    • Elm is now his favorite language
    • Elm is also the simplest (!)
    • elm-lang.org
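    The "CACHE ALL THE THINGS" point rests on purity: if a function always returns the same result for the same arguments, memoizing it is always safe. A minimal Python sketch of the idea behind Elm's `lazy` (the `render` function here is a made-up stand-in):

```python
from functools import lru_cache

# Because render is pure (output depends only on its arguments),
# caching by argument is safe -- the essence of Elm's `lazy` rendering.
@lru_cache(maxsize=None)
def render(width, height):
    return f"<div style='width:{width}px;height:{height}px'></div>"

first = render(10, 20)
second = render(10, 20)
print(first is second)  # True -- the second call never re-renders
```

    With mutation allowed, this optimization would be unsound, which is why Elm can apply it everywhere by default.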
    → 7:00 AM, Jun 3
  • Clojure/West 2015: Notes from Day Three

    Everything Will Flow

    • Zach Tellman, Factual
    • queues: didn't deal with directly in clojure until core.async
    • queues are everywhere: even software threads have queues for their execution, and correspond to hardware threads that have their own buffers (queues)
    • queueing theory: a lot of math, ignore most
    • performance modeling and design of computer systems: queueing theory in action
    • closed systems: when produce something, must wait for consumer to deal with it before we can produce something else
      • ex: repl, web browser
    • open systems: requests come in without regard for how fast the consumer is using them
      • adding consumers makes the open systems we build more robust
    • but: because we're often adding producers and consumers, our systems may respond well for a good while, but then suddenly fall over (can keep up better for longer, but when gets unstable, does so rapidly)
    • lesson: unbounded queues are fundamentally broken
    • three responses to too much incoming data:
      • drop: valid if new data overrides old data, or if don't care
      • reject: often the only choice for an application
      • pause (backpressure): often the only choice for a closed system, or sub-system (can't be sure that dropping or rejecting would be the right choice for the system as a whole)
      • this is why core.async has the puts buffer in front of their normal channel buffer
    • in fact, queues don't need a buffer so much as they need the puts and takes buffers, which is what the default channel from core.async gives you
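    The three responses to overload can be sketched with Python's bounded `queue.Queue` (an illustration of the concepts, not core.async's API):

```python
import queue

# A bounded queue: unbounded queues are "fundamentally broken" because
# they hide overload until the system falls over.
q = queue.Queue(maxsize=2)
q.put("a")
q.put("b")

# reject: fail fast and tell the producer "no"
try:
    q.put_nowait("c")
    rejected = False
except queue.Full:
    rejected = True

# drop: discard the oldest item to make room for the newest
if q.full():
    q.get_nowait()  # "a" is dropped
q.put_nowait("c")

# pause (backpressure): a plain q.put("d") would now block this producer
# until a consumer frees a slot -- often the only safe choice for a
# closed (sub)system
print(rejected, list(q.queue))  # True ['b', 'c']
```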

    Clojure At Scale

    • Anthony Moocar, Walmart Labs
    • redis and cassandra plus clojure
    • 20 services, 70 lein projects, 50K lines of code
    • prefer component over global state
    → 7:00 AM, Apr 29
  • Clojure/West 2015: Notes from Day Two

    Data Science in Clojure

    • Soren Macbeth; yieldbot
    • yieldbot: similar to how adwords works, but not google and not on search results
    • 1 billion pageviews per week: lots of data
    • end up using almost all of the big data tools out there
    • EXCEPT HADOOP: no more hadoop for them
    • lots of machine learning
    • always used clojure, never had or used anything else
    • why clojure?
      • most of the large distributed processing systems run on the jvm
      • repl great for data exploration
      • no delta between prototyping and production code
    • cascalog: was great, enabled them to write hadoop code without hating it the whole time, but still grew to hate hadoop (running hadoop) over time
    • december: finally got rid of last hadoop job, now life is great
    • replaced with: storm
    • marceline: clojure dsl (open-source) on top of the trident java library
    • writing trident in clojure much better than using the java examples
    • flambo: clojure dsl on top of spark's java api
      • renamed, expanded version of climate corp's clj-spark

    Pattern Matching in Clojure

    • Sean Johnson; path.com
    • runs remote engineering team at Path
    • history of pattern matching
      • SNOBOL: 60s and 70s, pattern matching around strings
      • Prolog: 1972; unification at its core
      • lots of functional and pattern matching work in the 70s and 80s
      • 87: Erlang -> from prolog to telecoms; functional
      • 90s: standard ml, haskell...
      • clojure?
    • prolog: unification does spooky things
      • bound match unbound
      • unbound match bound
      • unbound match unbound
    • clojurific ways: core.logic, miniKanren, Learn Prolog Now
    • erlang: one way pattern matching: bound match unbound, unbound match bound
    • what about us? macros!
    • pattern matching all around us
      • destructuring is a mini pattern matching language
      • multimethods dispatch based on pattern matching
      • case: simple pattern matching macro
    • but: we have macros, we can use them to create the language that we want
    • core.match
    • dennis' library defun: macros all the way down: a macro that wraps the core.match macro
      • pattern matching macro for defining functions just like erlang
      • (defun say-hi (["Dennis"] "Hi Dennis!") ([:catty] "Morning, Catty!"))
      • can also use the :guard syntax from core.match in defining your functions' pattern matching
      • not in clojurescript yet...
    • but: how well does this work in practice?
      • falkland CMS, SEACAT -> incidental use
      • POSThere.io -> deliberate use (the sweet spot)
      • clj-json-ld, filter-map -> maximal use
    • does it hurt? ever?
    • limitations
      • guards only accept one argument, workaround with tuples
    • best practices
      • use to eliminate conditionals at the top of a function
      • use to eliminate nested conditionals
      • handle multiple function inputs (think map that might have different keys in it?)
      • recursive function pattern: one def for the start, one def for the work, one def for the finish
        • used all over erlang
        • not as explicit in idiomatic clojure
    → 7:00 AM, Apr 28
  • Clojure/West 2015: Notes from Day One

    Life of a Clojure Expression

    • John Hume, duelinmarkers.com (DRW trading)
    • a quick tour of clojure internals
    • giving the talk in org mode (!)
    • disclaimers: no expert, internals can change, excerpts have been mangled for readability
    • most code will be java, not clojure
    • (defn m [v] {:foo "bar" :baz v})
    • minor differences: calculated key, constant values, more than 8 key/value pairs
    • MapReader called from static array of IFns used to track macros; triggered by '{' character
    • PersistentArrayMap used for less than 8 objects in map
    • eval treats forms wrapped in (do..) as a special case
    • if form is non-def bit of code, eval will wrap it in a 0-arity function and invoke it
    • eval's macroexpand will turn our form into (def m (fn [v] {:foo "bar" :baz v}))
    • checks for duplicate keys twice: once on read, once on analyze, since forms for keys might have been evaluated into duplicates
    • java class emitted at the end with name of our fn tacked on, like: class a_map$m
    • intelli-j will report a lot of unused methods in the java compiler code, but what's happening is the methods are getting invoked, but at load time via some asm method strings
    • no supported api for creating small maps with compile-time constant keys; array-map is slow and does a lot of work it doesn't need to do

    Clojure Parallelism: Beyond Futures

    • Leon Barrett, the climate corporation
    • climate corp: model weather and plants, give advice to farmers
    • wrote Claypoole, a parallelism library
    • map/reduce to compute average: might use future to shove computation of the average divisor (inverse of # of items) off at the beginning, then do the map work, then deref the future at the end
    • future -> future-call: sends fn-wrapped body to an Agent/soloExecutor
    • concurrency vs parallelism: concurrency means things could be re-ordered arbitrarily, parallelism means multiple things happen at once
    • thread pool: recycle a set number of threads to avoid constantly incurring the overhead of creating a new thread
    • agent thread pool: used for agents and futures; program will not exit while threads are there; lifetime of 60 sec
    • future limitations
      • tasks too small for the overhead
      • exceptions get wrapped in ExecutionException, so your try/catches won't work normally anymore
    • pmap: just a parallel map; lazy; runs N-cpu + 3 tasks in futures
      • generates threads as needed; could have problems if you're creating multiple pmaps at once
      • slow task can stall it, since it waits for the first task in the sequence to complete for each trip through
      • also wraps exceptions just like future
    • laziness and parallelism: don't mix
    • core.async
      • channels and coroutines
      • reads like go
      • fixed-size thread pool
      • handy when you've got a lot of callbacks in your code
      • mostly for concurrency, not parallelism
      • can use pipeline for some parallelism; it's like a pmap across a channel
      • exceptions can kill coroutines
    • claypoole
      • pmap that uses a fixed-size thread pool
      • with-shutdown! will clean up thread pool when done
      • eager by default
      • output is an eagerly streaming sequence
      • also get pfor (parallel for)
      • lazy versions are available; can be better for chaining (fast pmap into slow pmap would have speed mismatch with eagerness)
      • exceptions are re-thrown properly
      • no chunking worries
      • can have priorities on your tasks
    • reducers
      • uses fork/join pool
      • good for cpu-bound tasks
      • gives you a parallel reduce
    • tesser
      • distributable on hadoop
      • designed to max out cpu
      • gives parallel reduce as well (fold)
    • tools for working with parallelism:
      • promises to block the state of the world and check things
      • YourKit (?) for jvm profiling
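    The Claypoole-style pmap — eager, order-preserving, over a fixed-size pool, with exceptions re-thrown rather than wrapped — can be sketched in Python (`pmap` and `threads` are our names, not Claypoole's API):

```python
from concurrent.futures import ThreadPoolExecutor

# An eager parallel map over a fixed-size thread pool, in the spirit of
# Claypoole's pmap (Python sketch, not the actual library).
def pmap(f, xs, threads=4):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        # pool.map preserves input order, and re-raises the original
        # worker exception when results are consumed (no wrapping)
        return list(pool.map(f, xs))

print(pmap(lambda n: n * n, range(5)))  # [0, 1, 4, 9, 16]
```

    The fixed pool size avoids the unbounded thread creation that can bite with nested uses of clojure's built-in pmap.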

    Boot Can Build It

    • Alan Dipert and Micha Niskin, adzerk
    • why a new build tool?
      • build tooling hasn't kept up with the complexity of deploys
      • especially for web applications
      • builds are processes, not specifications
      • most tools: maven, ant, oriented around configuration instead of programming
    • boot
      • many independent parts that do one thing well
      • composition left to the user
      • maven for dependency resolution
      • builds clojure and clojurescript
      • sample boot project has main method (they used java project for demo)
      • uses '--' for piping tasks together (instead of the real |)
      • filesets are generated and passed to a task, then output of task is gathered up and sent to the next task in the chain (like ring middleware)
    • boot has a repl
      • can do most boot tasks from the repl as well
      • can define new build tasks via deftask macro
      • (deftask build ...)
      • (boot (watch) (build))
    • make build script: (build.boot)
      • #!/usr/bin/env boot
      • write in the clojure code defining and using your boot tasks
      • if it's in build.boot, boot will find it on command line for help and automatically write the main fn for you
    • FileSet: immutable snapshot of the current files; passed to task, new one created and returned by that task to be given to the next one; task must call commit! to commit changes to it (a la git)
    • dealing with dependency hell (conflicting dependencies)
      • pods
      • isolated runtimes, with own dependencies
      • some things can't be passed between pods (such as the things clojure runtime creates for itself when it starts up)
      • example: define pod with env that uses clojure 1.5.1 as a dependency, can then run code inside that pod and it'll only see clojure 1.5.1

    One Binder to Rule Them All: Introduction to Trapperkeeper

    • Ruth Linehan and Nathaniel Smith; puppetlabs
    • back-end service engineers at puppetlabs
    • service framework for long-running applications
    • basis for all back-end services at puppetlabs
    • service framework:
      • code generalization
      • component reuse
      • state management
      • lifecycle
      • dependencies
    • why trapperkeeper?
      • influenced by clojure reloaded pattern
      • similar to component and jake
      • puppetlabs ships on-prem software
      • need something for users to configure, may not have any clojure experience
      • needs to be lightweight: don't want to ship jboss everywhere
    • features
      • turn on and off services via config
      • multiple web apps on a single web server
      • unified logging and config
      • simple config
    • existing services that can be used
      • config service: for parsing config files
      • web server service: easily add ring handler
      • nrepl service: for debugging
      • rpc server service: nathaniel wrote
    • demo app: github -> trapperkeeper-demo
    • anatomy of service
      • protocol: specifies the api contract that that service will have
      • can have any number of implementations of the contract
      • can choose between implementations at runtime
    • defservice: like defining a protocol implementation, one big series of defs of fns: (init [this context] (let ...)))
      • handle dependencies in defservice by vector after service name: [[:ConfigService get-in-config] [:MeowService meow]]
      • lifecycle of the service: what happens when initialized, started, stopped
      • don't have to implement every part of the lifecycle
    • config for the service: pulled from file
      • supports .json, .edn, .conf, .ini, .properties, .yaml
      • can specify single file or an entire directory on startup
      • they prefer .conf (HOCON)
      • have to use the config service to get the config values
      • bootstrap.cfg: the config file that controls which services get picked up and loaded into app
      • order is irrelevant: will be decided based on parsing of the dependencies
    • context: way for service to store and access state locally not globally
    • testing
      • should write code as plain clojure
      • pass in context/config as plain maps
      • trapperkeeper provides helper utilities for starting and stopping services via code
      • with-app-with-config macro: offers symbol to bind the app to, plus define config as a map, code will be executed with that app binding and that config
    • there's a lein template for trapperkeeper that stubs out working application with web server + test suite + repl
    • repl utils:
      • start, stop, inspect TK apps from the repl: (go); (stop)
      • don't need to restart whole jvm to see changes: (reset)
      • can print out the context: (:MeowService (context))
    • trapperkeeper-rpc
      • macro for generating RPC versions of existing trapperkeeper protocols
      • supports https
      • defremoteservice
      • with web server on one jvm and core logic on a different one, can scale them independently; can keep web server up even while swapping out or starting/stopping the core logic server
      • future: rpc over ssl websockets (using message-pack in transit for data transmission); metrics, function retrying; load balancing

    Domain-Specific Type Systems

    • Nathan Sorenson, sparkfund
    • you can type-check your dsls
    • libraries are often examples of dsls: not necessarily macros involved, but have opinionated way of working within a domain
    • many examples pulled from "How to Design Programs"
    • domain represented as data, interpreted as information
    • type structure: syntactic means of enforcing abstraction
    • abstraction is a map to help a user navigate a domain
      • audience is important: would give different map to pedestrian than to bus driver
    • can also think of abstraction as specification, as dictating what should be built or how many things should be built to be similar
    • showing inception to programmers is like showing jaws to a shark
    • fable: parent trap over complex analysis
    • moral: types are not data structures
    • static vs dynamic specs
      • static: types; things as they are at compile time; definitions and derivations
      • dynamic: things as they are at runtime; unit tests and integration tests; expressed as falsifiable conjectures
    • types not always about enforcing correctness, so much as describing abstractions
    • simon peyton jones: types are the UML of functional programming
    • valuable habit: think of the types involved when designing functions
    • spec-tacular: more structure for datomic schemas
      • from sparkfund
      • the type system they wanted for datomic
      • open source but not quite ready for public consumption just yet
      • datomic too flexible: attributes can be attached to any entity, relationships can happen between any two entities, no constraints
      • use specs to articulate the constraints
      • (defspec Lease [lesse :is-a Corp] [clauses :is-many String] [status :is-a Status])
      • (defenum Status ...)
      • wrote query language that's aware of the defined types
      • uses bi-directional type checking: github.com/takeoutweight/bidirectional
      • can write sensical error messages: Lease has no field 'lesee'
      • can pull type info from their type checker and feed it into core.typed and let core.typed check use of that data in other code (enforce types)
      • does handle recursive types
      • no polymorphism
    • resources
      • practical foundations for programming languages: robert harper
      • types and programming languages: benjamin c pierce
      • study haskell or ocaml; they've had a lot of time to work through the problems of types and type theory
    • they're using spec-tacular in production now, even using it to generate type docs that are useful for non-technical folks to refer to and discuss; but don't feel the code is at the point where other teams could pull it in and use it easily

    ClojureScript Update

    • David Nolen
    • ambly: cljs compiled for iOS
    • uses bonjour and webdav to target ios devices
    • creator already has app in app store that was written entirely in clojurescript
    • can connect to device and use repl to write directly on it (!)

    Clojure Update

    • Alex Miller
    • clojure 1.7 is at 1.7.0-beta1 -> final release approaching
    • transducers coming
    • define a transducer as a set of operations on a sequence/stream
      • (def xf (comp (filter odd?) (map inc) (take 5)))
    • then apply transducer to different streams
      • (into [] xf (range 1000))
      • (transduce xf + 0 (range 1000))
      • (sequence xf (range 1000))
    • reader conditionals
      • portable code across clj platforms
      • new extension: .cljc
      • use to select out different expressions based on platform (clj vs cljs)
      • #?(:clj (java.util.Date.) :cljs (js/Date.))
      • can fall through the conditionals and emit nothing (not nil, but literally don't emit anything to be read by the reader)
    • performance has also been a big focus
      • reduced class lookups for faster compile times
      • iterator-seq is now chunked
      • multimethod default value dispatch is now cached
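    The core idea of transducers — a transformation of reducing functions, composable and independent of the source — can be sketched in Python (`mapping`, `filtering`, and `compose` are our names, not Clojure's API):

```python
from functools import reduce

# A transducer takes a reducing function and returns a new one, so one
# composed pipeline can run over any source without intermediate sequences.
def mapping(f):
    return lambda rf: lambda acc, x: rf(acc, f(x))

def filtering(pred):
    return lambda rf: lambda acc, x: rf(acc, x) if pred(x) else acc

def compose(*xfs):
    def xf(rf):
        for t in reversed(xfs):  # like comp: leftmost transducer runs first
            rf = t(rf)
        return rf
    return xf

# roughly (transduce (comp (filter odd?) (map inc)) + 0 (range 1000));
# early termination (take) needs a reduced-value convention, omitted here
xf = compose(filtering(lambda n: n % 2 == 1), mapping(lambda n: n + 1))
print(reduce(xf(lambda acc, x: acc + x), range(1000), 0))  # 250500
```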
    → 7:00 AM, Apr 27
  • Apple Woes

    For me, the real sign that Apple might be in trouble was when my wife upgraded her phone, and decided against getting a new iPhone.

    Understand, my wife’s the reason we’re an Apple family. She convinced me to try out a Mac way back in 1999, in the days of Bondi Blue iMacs and OS 9. The experience hooked me, but the seed planted was hers.

    16 years later, everything about Apple frustrates her. She couldn’t organize her photos on her iPhone, couldn’t even access them all without third-party software. Her last iPad update wiped out all the iMovie videos she’d created over the last six months. Apple Maps always led her astray, and Siri never helped.

    So she went Android for her last phone. That’s the Apple warning bell for me: my wife is Apple’s target market - smart but non-technical, creative and needing things to just work - and she doesn’t want what they’re selling anymore.

    → 8:00 AM, Jan 12
  • Trust is Critical to Building Software

    So much of software engineering is built on trust.

    I have to trust that the other engineers on my team will pull me back from the brink if I start to spend too much time chasing down a bug. I have to trust that they’ll catch the flaws in my code during code review, and show me how to do it better. When reviewing their code, at some point I have to trust that they’ve at least tested things locally, and written something that works, even if it doesn’t work well.

    Beyond my team, I have to trust the marketing and sales folks to bring in new customers so we can grow the company. I’ve got to trust the customer support team to keep our current customers happy, and to report bugs they discover that I need to fix. I have to trust the product guys to know what features the customer wants next, so we don’t waste our time building things nobody needs.

    And every time I use a test fixture someone else wrote, I’m trusting the engineers that worked here in the past. When I push new code, I’m trusting our CI builds to run the tests properly and catch anything that might have broken. By trusting those tests, I’m trusting everyone that wrote them, too.

    Every new line of code I write, every test I create, adds to that chain of trust, and brings me into it. As an engineer, I strive to be worthy of that trust, to build software that is a help, and not a burden, to those that rely on it.

    → 8:00 AM, Sep 15
  • Cranky Old Man talks about the new Apple Watch

    “It tracks your exercise!” “I don’t need a watch to tell me when I’ve gotten exercise. I’m well aware when it’s happening, because I’m the one doing it!”

    “It keeps accurate time!” “So does my alarm clock, my computer, my phone, and my car. When do I not have a clock staring me in the face, counting down my final hours?”

    “Friends lets you send a message with a single touch!” “All my friends are dead.”

    “It gets your attention with a tap! Isn’t that cute?” “A tap? From that whopper? It’d break my wrist!”

    “You can dictate messages to it!” “Sure, if you enunciate like a British MP. That’s all I need, to spend my day, sitting on a park bench, cursing at my wrist.”

    “You can read email on it!” “Maybe YOU can. With the fonts I use, it’d only display one word at a time!”

    “You can send sketches to people!” “Right. Just what the world needs, more shaky doodles from my arthritic hands.”

    “It can record your heartbeat!” “Now that might be useful. Can it send it to a doctor, or - no? Balderdash.”

    “You can use it to pay for things!” “Like I couldn’t do it before? Listen, sonny, cash is still accepted everywhere.”

    → 7:00 AM, Sep 12
  • How To: Fix Blank Screen on the new Nook Glowlight

    I bought a new Nook Glowlight soon after it came out. I’m happy with it for the most part, but it has an annoying habit of going to a blank screen when I put it into sleep mode.

    When I wake the Nook from this blank screen, it looks like it displays the book I was reading correctly, but each tap of the screen advances the text, even if I tap on the bottom to try to get to the Settings menu. Eventually the Nook freezes completely.

    To fix this problem, I reboot the Nook when the blank screen first comes up. To do that, I hold the power button down for 20-30 seconds, until the blank screen is replaced by the booting up screen. I know it’s rebooting when I see the word “nook” printed in the middle, then “E-ink”, and finally the spiral of dots that means “loading”.

    Hope this helps other Nook Glowlight owners!

    → 3:44 PM, Dec 28
  • Software Engineering Lessons from NASA: Part One

    Long before I became a programmer, I worked as an optical engineer at NASA's Goddard Space Flight Center. I was part of the Optical Alignment and Test group, which is responsible for the integration and testing of science instruments for spacecraft like the Hubble Space Telescope, Swift, and the upcoming James Webb Space Telescope.

    While most of the day-to-day engineering practices I learned didn't transfer over to software engineering, I've found certain principles still hold true. Here's the first one:

    The first version you build will be wrong.

    At NASA, it was taken as axiomatic that the first time you built an instrument, you'd screw it up. Engineers always pressed for the budget to build an Engineering Test Unit (ETU), which was a fully-working mock-up of the instrument you ultimately wanted to launch. Since the ETU was thrown away afterward, you could use it to practice the techniques you'd use to assemble the real instrument.

    And you needed the practice. The requirements were often so tight (to the thousandth of an inch) that no matter how well you'd planned it out, something always went wrong: a screw would prove too hard to reach, or a baffle would be too thin to block all the light.

    No amount of planning and peer review could find all the problems. The only way to know for sure if it would work was to build it. By building an ETU, you could shake out all the potential flaws in the design, so the final instrument would be solid.

    In software, I've found the same principle holds: the first time I solve a problem, it's never the optimal solution. Oddly enough, I often can't see the optimal solution until after I've solved the problem in some other way.

    Knowing this, I treat the first solution as a learning experience, to be returned to down the line and further optimized. I still shoot for the best first solution I can get, but I don't beat myself up when I look back at the finished product and see the flaws in my design.

    → 10:56 AM, Dec 26
  • Five Reasons Your Company Should Allow Remote Developers

    If you don’t allow your software developers to work from home, you’re not just withholding a nice perk from your employees, you’re hurting your business.

    Here are five reasons letting developers work remotely should be the default:

    1. Widen your talent pool

    Good software engineers are already hard to find. On top of that, you've got to hire someone skilled in your chosen tech stack, which narrows the pool even further. Why restrict yourself to those engineers within driving distance to your office? Can you really afford to wait for talent to move close to you?

    The CTO for my current gig lives an hour away from the office, and can’t move because of his wife’s job. We wouldn’t have been able to hire him if we didn’t let him work from home. Who are you missing out on because you won’t look at remote devs?

    2. Reclaim commute time

    The average commute time in the US is 30 minutes. That's an hour a day your in-office developers aren't working. If you let them work from home, they can start earlier and finish later.

    Don’t think they will? The evidence shows they do: people working from home work longer hours than people in the office.

    An extra hour of work a day is five more hours a week, or almost three more work days in the month. Why throw that time away?
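The arithmetic above is easy to check. A quick sketch (assuming an 8-hour workday and an average of ~4.3 weeks per month):

```python
# Back-of-the-envelope: time reclaimed by skipping the average US commute.
AVG_COMMUTE_HOURS_PER_DAY = 1.0    # 30 minutes each way
WORKDAYS_PER_WEEK = 5
WEEKS_PER_MONTH = 52 / 12          # ~4.33

hours_per_week = AVG_COMMUTE_HOURS_PER_DAY * WORKDAYS_PER_WEEK
hours_per_month = hours_per_week * WEEKS_PER_MONTH
workdays_per_month = hours_per_month / 8   # assuming an 8-hour workday

print(f"{hours_per_week:.0f} hours/week, {hours_per_month:.1f} hours/month, "
      f"about {workdays_per_month:.1f} extra workdays per month")
```

Five hours a week works out to almost three extra 8-hour days every month.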

    3. Reduce sick leave

    When I've got a cold or flu, I stay home. I spend the first day or two resting, but by the third day I'm usually able to work a little, even though I'm too sick to go into the office.

    If my boss didn’t let me work from home, he wouldn’t get those “sick hours” from me. I’d be forced into taking more time off, getting less done.

    And when my wife’s sick, I don’t have to choose between taking care of her and getting my work done. By working from home, I can do both.

    4. Increase employee retention

    The only thing worse than not hiring a good dev is losing an engineer you've already got. When they leave, you're not only losing a resource, you're losing all the historical knowledge they have about the system: weird bugs that only show up once a quarter, why the team chose X db instead of another, etc.

    Since the competition for talent is fierce, you don’t want to lose out to another company. Letting developers work from home sends a clear signal to your employees that they’re valued and that you appreciate work-life balance. And with many companies sticking to the old “gotta be in the office” way of thinking, you’re ensuring that any offer your employees get from another company won’t be as attractive.

    5. Stay focused on work done

    Finally, letting developers work from home forces the entire team to focus on what's really important: getting work done. It doesn't matter how many hours a developer spends in the office; if that time isn't productive, it's wasted.

    What does matter is how much work a dev gets done: how many bugs squashed, how many new features completed, how many times they jump in to help another engineer. What you need from your developers is software for your business, not time spent in a cubicle. If they’re not producing, they should be let go. If they are producing, does it matter if they’re at a coffee shop downtown?

    And if you don’t have a way of measuring developer productivity beyond hours spent at their desk, get one. You can’t improve something you don’t measure, and team productivity should be high on your list of things to always be improving.

    → 1:00 PM, Mar 29
  • ClojureWest 2013: Day Three Notes

    Editing Clojure with Emacs: Ryan Neufeld

    • emacs live, emacs prelude, emacs-starting-kit: for getting up and running
    • bit.ly/vim-revisited - post about going back to basic editors
    • bit.ly/clj-ready: ryan's version of emacs starter kit
    • nREPL & nREPL.el instead of inferior-lisp
    • melpa for your package manager
    • have a look at the projectile minor mode for project management
    • use paredit to build up clojure code trees, not just characters
    • slurp and slice to add new structure: () then slurp to wrap

    Ritz: Missing Clojure Tooling: Hugo Duncan

    • clojure debugging tool
    • nREPL: lets you connect to remote clojure VM (transport + socket + protocol + REPL)
    • sits in-between your local nrepl and the remote nrepl process
    • available as marmalade package
    • also lein plugin: lein-ritz
    • turn on break on exceptions: gives you common-lisp like stacktrace + restart options (continue, ignore, etc)
    • can open up stacktrace to see local variables within the frame
    • can even evaluate expressions within the frame (!)
    • also pull up the source code for a frame (clojure or java)
    • can set breakpoints on lines of code in emacs; view list of all current breakpoints in a separate buffer
    • warning: lists appear in stacktrace frame fully-realized (will be fixed)

    Macros vs Monads: Chris Houser and Jonathan Claggett

    • synthread: library of macros that use Clojure's -> (threading macro) as the do
    • usually pulled in with -> as its alias
    • defines macros for most of clojure's control-flow code
    • also has updater macros
    • use for cases where pure functional code is too verbose and/or hard to read
    • replaces use of state monad
    • monads are infectious (end up passing them around everywhere)
    • clojure 1.5 has cond-> macro
    • rover example project up on github: lonocloud/synthread

    How to Sneak Clojure into Your Rails Shop: Joshua Bellanco

    • and improve your rails hosting along the way
    • chief scientist at burnside digital (based in portland)
    • if this was 2006, we'd be talking about sneaking rails into your java shop
    • learn from rails: better to ask forgiveness than permission, use clojure whenever it's the right tool for the job, don't be afraid to show off your successes
    • step 1: convince them to use jruby (better tooling, better performance, java ecosystem, plenty of hosting options)
    • step 2: torquebox for deployment (benefits of jboss)
    • step 3: go in with clojure and start using immutant overlay (gives jboss, torquebox, jruby); lein immutant run runs both ruby and clojure apps
    • step 4: openshift (open source heroku from redhat)
    • step 5: make clojure and ruby talk to each other (hornetq + torquebox/immutant messaging libraries)
    • step 6: show off!
    • example project: memolisa: sends alerts and confirmation when alert is read; rails for users + groups, clojure for the messaging

    clj-mook: Craig Brozefsky

    • library for testing web apps
    • persistent session, default request params, html parsing
    • motivated by shift from ring to immutant

    RxJava: Dave Ray

    • open-source implementation of M$ Reactive Extensions
    • operators for creating, composing, and manipulating Observable streams
    • sketch up at daveray/rx-clj
    • also: netflix/rxjava

    VMFest: Toni Batchelli

    • goal: make virtualbox behave like lightweight cloud provider
    • instantiate multiple vms from the same image
    • wrap the OOP api of virtualbox with a sane clojure one
    • has hardware dsl
    • one step further: little-fluffy-cloud -> RESTful interface for vmfest
    • visaje: build VMs from data

    shafty: Chris Meiklejohn

    • basho technologies: erlang and js, with some clojure
    • functional reactive programming library
    • based on flapjax, implemented in clojurescript
    • implementation of core API from flapjax
    • FRP: authoring reactive programs without callbacks
    • FRP: event streams and behaviors

    swearjure: Gary Fredericks

    • subset of clojure with no alphanumeric characters
    • exercise in extreme constraints
    • integers but no floats
    • true and false available

    Evolutionary Art with Clojure: Alan Shaw

    • clevolution: partial implementation of paper on genetic programming
    • converting s-expressions to images
    • using self as filtering function: delete the images he doesn't like
    • want to move to true evolutionary: use function to weed through the images, allow cross-breeding between s-expressions
    • clisk, clojinc, clevolution

    Simulation Testing with Simulant: Stuart Halloway

    • example-based tests: setup, inputs, outputs, validation
    • originally built as tool for testing datomic
    • no language indirection: it's about using clojure and datomic
    • write model for what will happen to the application, then writeup actions to take against the system
    • can speed up time in the simulation, so you don't have to wait 6 months for 6 months of results to go through
    • stores db with test results *and* the db tests ran against
    • can find and diagnose bugs by querying db after tests run
    • use clojure.data.generators to generate random test data (ints, vectors, maps)
    • github.com/stuarthalloway/presentations/wiki
    • contact at @stuarthalloway to have come out to speak at company or group
    • even working through the model of users and use is a good exercise; good way to validate the assumptions you're making about the system
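A rough Python analogue of the approach in these notes: generate random actions (in the spirit of clojure.data.generators), run them against the system, and keep every result around so bugs can be found by querying afterward. All names here are illustrative, not Simulant's API:

```python
import random

def gen_action(rng):
    """Random test data: a random user performing a random operation."""
    return {"user": rng.randrange(100),
            "op": rng.choice(["deposit", "withdraw"]),
            "amount": rng.randrange(1, 50)}

def run_simulation(steps, seed=42):
    """Run random actions against a toy account system; store every result."""
    rng = random.Random(seed)          # seeded, so runs are reproducible
    balances, results = {}, []
    for _ in range(steps):
        action = gen_action(rng)
        bal = balances.get(action["user"], 0)
        delta = action["amount"] if action["op"] == "deposit" else -action["amount"]
        balances[action["user"]] = bal + delta
        results.append({**action, "balance_after": bal + delta})
    return results                     # the stored "db" of test results

# Diagnose bugs by querying the stored results, not by asserting inline:
results = run_simulation(1000)
overdrafts = [r for r in results if r["balance_after"] < 0]
print(f"{len(overdrafts)} overdrafts out of {len(results)} actions")
```

The seed makes a failing run repeatable, which is what lets you go back and query the same history a bug came from.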
    → 9:08 AM, Mar 21
  • ClojureWest 2013: Day Two Notes

    Winning the War on Javascript: Bodil Stokke

    • catnip: beginners clojure editor in browser
    • originally used coffeescript because cljs was immature and hard to work with js objects
    • currently working on converting catnip to cljs
    • error: another clojurescript testing framework, built around asynchronous testing

    PuppetDB: Sneaking Clojure into SysAdmins' Toolkits: Deepak Giridharaghopal

    • ops has a lot of entropy: spooky action at a distance: devs or admins logging in to one of many servers and mucking around without telling you
    • lack of predictability ruins automation and abstraction
    • problems with previous software in Ruby: not fast, only one core, mutable state everywhere, runtime compatibility a problem
    • solution: puppetdb in clojure for storing and querying data about systems
    • used CQRS: command query responsibility segregation -> use a different model to update than to read info
    • send commands to a /command endpoint, which queues the command for parsing and processing
    • build command processing functions as multi-methods switching off of the command and version sent
    • can also turn on live repl, to connect to running code and hack
    • queries have their own AST-based syntax; sent as json, built as vector tree
    • can ship the whole thing as a single uberjar, with built-in db, etc
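A Python sketch of the command-processing idea from these notes: commands are queued, then dispatched on the (command name, version) pair, the role Clojure multimethods play in PuppetDB. The command names and payloads are made up for illustration:

```python
from collections import deque

HANDLERS = {}

def handler(command, version):
    """Register a processing function for one (command, version) pair."""
    def register(fn):
        HANDLERS[(command, version)] = fn
        return fn
    return register

@handler("replace-facts", 1)
def replace_facts_v1(payload):
    return f"stored facts for {payload['certname']}"

@handler("replace-facts", 2)
def replace_facts_v2(payload):
    return f"stored {len(payload['facts'])} facts for {payload['certname']}"

def process(queue):
    """Drain the command queue, dispatching each message to its handler."""
    return [HANDLERS[(msg["command"], msg["version"])](msg["payload"])
            for msg in queue]

queue = deque([
    {"command": "replace-facts", "version": 1,
     "payload": {"certname": "web01"}},
    {"command": "replace-facts", "version": 2,
     "payload": {"certname": "db01", "facts": {"os": "linux"}}},
])
print(process(queue))
```

Versioned dispatch is what lets old and new agents keep sending commands to the same endpoint while the handlers evolve underneath them.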

    Securing Clojure Web Services & Apps with Friend: Chas Emerick

    • authentication & authorization (who are you? what are you allowed to do?)
    • options: spring-security (java, not recommended), sandbar, ring-basic-authentication, clj-oauth2
    • most common: roll your own
    • wrote friend to have a common auth framework
    • uses ad-hoc hierarchies for roles
    • add workflows to specify how to authenticate a request that doesn't have authentication yet
    • friend-demo.herokuapp.com for multiple demos with source code
    • recommend using b-crypt over sha

    FRP in ClojureScript with Javelin: Alan Dipert

    • event stream: sequence of values over time
    • behavior: function that updates according to changing values from event stream
    • reactive evaluation: holds off on evaluating until all values are available
    • similar to spreadsheet formula evaluation (!)
    • FRP maintains evaluation order despite lack of all values at once
    • current FRP in clojurescript: FlapJax
    • javelin: abstract spreadsheet library for reactive programming with values
    • everything contained in the "cell" macro
    • web app has single state at any point in time, contained in the "stem cell"
    • everything in app either in stem cell or derived from it
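The spreadsheet analogy in these notes can be sketched in a few lines of Python: a formula cell recomputes whenever one of its input cells changes, which is roughly what Javelin's "cell" macro gives you (this toy skips Javelin's real machinery entirely):

```python
class Cell:
    """A toy spreadsheet cell: either a plain value or a formula over inputs."""
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs = formula, inputs
        self.watchers = []
        for cell in inputs:
            cell.watchers.append(self)     # recompute me when an input changes
        self.value = formula(*(c.value for c in inputs)) if formula else value

    def set(self, value):
        self.value = value
        for w in self.watchers:            # propagate to derived cells
            w.set(w.formula(*(c.value for c in w.inputs)))

price = Cell(10)
qty = Cell(3)
total = Cell(formula=lambda p, q: p * q, inputs=(price, qty))
print(total.value)   # 30
qty.set(5)
print(total.value)   # 50, recomputed automatically
```

The "stem cell" idea from the talk is this same pattern with a single input cell holding all app state, and everything else derived from it.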

    SQL and core.logic Killed my ORM: Craig Brozefsky

    • uses clojure for analysis engine looking for possible malware actions
    • core.logic engine takes observations and creates IOCs (indications of compromise) + html
    • observations: wrapper around core.logic's defrel
    • IOCs: severity + confidence, explanation, suggested remediation
    • the reasoned schemer: handed out to all their analysts to explain logic programming to them so they can use the system

    Macros: Why, When and How: Gary Fredericks

    • macro: special function that takes form as argument and returns a form
    • run at compile time
    • can always be replaced by its expansion
    • when writing macros, helps to know what you want it to expand to
    • use macroexpand-1 to find out what it's going to expand to
    • cannot pass macro to higher-order function (not composable at runtime)
    • macros can make code harder to read; person reading code has to be familiar with macro expansion to really know what your code is doing
    • tolerated usage: defining things, wrapping code execution, delaying execution, capturing code, DSLs, compile-time optimizations (hiccup produces as much html as possible at compile time)
    • avoiding macros: get more familiar with higher-order function usage and paradigms
    • writing tolerable macros: use helper functions, naming conventions, no side effects
    • syntax-quote (backtick): like quote on steroids, gives you multiple benefits when used in a macro
    → 9:45 AM, Mar 20
  • ClojureWest 2013: Day One Notes

    Domain Driven Design in Clojure: Amit Rathore

    • read the book from eric evans
    • Lot of oop design principles carry over
    • shoot for 3-4 lines of clojure code per function
    • validateur, bouncer, clj-schema (validation)
    • if code confusing, demand simplification
    • make temp namespaces explicit: zolo.homeless
    • domain: business-important logic, not the API, not services, not validation, not talking to the db, just the stuff business people care about; should be pure
    • if you don't need it now, don't build it

    RESTful Clojure: Siva Jagadeesan

    • liberator, bishop: libraries to help build proper REST APIs in clojure
    • use the status codes: 1xx - Metadata, 2xx - success, 3xx - redirect, 4xx - client error, 5xx - server error
    • 405: method not allowed
    • 409: conflict
    • 404: resource not present
    • create returns Location header with location of new resource, in addition to the 201 (created) status code
    • even better: also return a set of links to related resources (rel = self) and transitions (rel = cancel)
    • allows client to be loosely coupled from API
    • client doesn't need to know how resources move through the system (transition logic)
    • REST means using multiple URIs, HTTP status codes, and Hypermedia
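The conventions above are easy to see in a small sketch: a create returns 201 Created, a Location header, and hypermedia links for related resources and transitions. The resource shape and URLs here are made up for illustration, not from liberator or bishop:

```python
from http import HTTPStatus

def create_order(order_id):
    """Build a REST-style response for a newly created resource."""
    return {
        "status": HTTPStatus.CREATED,             # 201
        "headers": {"Location": f"/orders/{order_id}"},
        "body": {
            "id": order_id,
            "links": [
                {"rel": "self",   "href": f"/orders/{order_id}"},
                {"rel": "cancel", "href": f"/orders/{order_id}/cancel"},
            ],
        },
    }

resp = create_order(42)
print(int(resp["status"]), resp["headers"]["Location"])
```

Because the client follows the `rel` links instead of building URLs itself, the server can change its transition logic without breaking anyone.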

    Clojure in the Large: Stuart Sierra

    • def'ing refs and atoms at the top level is basically global mutable state via singletons, please avoid
    • recommend using constructor functions to *return* the state variables you want to use, then pass that state along to each function
    • easier to test
    • explicit dependencies
    • safe to reload when working at the repl
    • thread-bound state also bad: assumes no lazy sequence returned in function bodies, hides dependencies, and limits caller to using one resource at a time
    • prefer passing context around to functions
    • can pull resources out of it
    • use namespace-qualified keys for isolation
    • isn't confined to a single thread
    • still need to cleanup at the end
    • more bookkeeping
    • true constants are fine as global vars (^:const)

    Pedestal: Architecture and Services: Tim Ewald

    • alpha release from relevance of open-source libs
    • use clojure end-to-end to build RIAs
    • demo: hammock cafe: clojurescript apps communicating to same back-end using datomic store
    • 2 halves: pedestal-service, pedestal-app
    • ring limits: bound to a single thread's stack
    • interceptors: map of functions, has enter and leave for processing requests and responses
    • can pause and resume along any thread
    • pushed to be as ring-compatible as possible
    • use of long polling and server-sent events (requests that come in slow and last a long time, get updated as more data comes in)
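A minimal Python sketch of the interceptor idea from these notes: each interceptor is a map of enter/leave functions, enter runs forward through the chain, and leave runs back in reverse (names are illustrative, not Pedestal's API):

```python
def execute(context, interceptors):
    """Run enter functions forward, then leave functions in reverse."""
    for icpt in interceptors:
        if "enter" in icpt:
            context = icpt["enter"](context)
    for icpt in reversed(interceptors):
        if "leave" in icpt:
            context = icpt["leave"](context)
    return context

logging = {"enter": lambda ctx: {**ctx, "log": ctx.get("log", []) + ["in"]},
           "leave": lambda ctx: {**ctx, "log": ctx["log"] + ["out"]}}
handler = {"enter": lambda ctx: {**ctx, "response": "hello"}}

print(execute({"request": "/"}, [logging, handler]))
```

Because the chain is plain data rather than a call stack, processing isn't bound to one thread's stack the way ring is, which is what makes pause-and-resume possible.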

    Design, Composition, and Performance: Rich Hickey

    • take things apart
    • design like bartok (embrace constraints, use harmonic sense)
    • code like coltrane (constant practice, keep harmonic sense)
    • build libraries like instruments (design for players, able to be combined with other things)
    • pursue harmony
    → 9:16 AM, Mar 19
  • How To: Sync Last Read Page Between Nook for Android and iPad

    Both Amazon and Barnes and Noble claim their ereaders will sync where you are in your books between the different devices (iPad, Android phone, PC or Mac) you might be reading on. Amazon’s works in the background, without a hitch, but Barnes and Noble’s (Nook) takes some finesse to get working.

    I nearly pulled my hair out in frustration trying to figure out why the last read page in my books wasn’t syncing between the Nook software on my iPad and the Nook software on my Android phone. I looked in the Settings for both devices, trying to find something that said “sync” and might have been turned off, but no luck.

    Here’s how to get them to sync:

    1. Whenever you finish reading on one device, instead of just closing out the software (or putting your device to sleep), go back to your "Home" or "Library".
    2. Look in the upper-right-hand corner of your Library screen. You should see a pair of curved arrows. Touch those arrows to force your device to upload its information (last read page, new bookmarks, etc) to B&N's servers.
    3. When you open your second device, make sure you start out on your Home/Library screen, and hit the same arrows. This will force the device to pull the latest information from B&N's servers (including the last read page you just uploaded).
    4. Open the book on your second device. It should jump to the last read page from the first device.
    That's it! Hope this helps, and let me know if you encounter any other weirdness while using the Nook ereader software.
    → 8:57 PM, Dec 15
  • Short Review of the Pangolin Performance

    Ever since I gave away my trusty Macbook, I’ve been pining for a new computer. The iPad purchase helped, but let’s face it: I’m not going to be able to do any programming on that thing.

    I knew I was going to install Linux on whatever I bought, so I thought I’d cut out the middle man and buy one direct from System76. They’ve been making laptops and desktops with Ubuntu pre-installed for a few years now, and their reviews have been positive. So I swallowed my trepidation at buying a laptop without test-driving it in the store first and ordered one of their Pangolin Performance machines.

    After two weeks, here’s what I think (in brief):

    What’s Good

    • Ubuntu: I didn't have to do any configuration out of the box, or hunt down any extra drivers. Nice.
    • Speed: Ye gods, is this thing fast. Transferring my entire music collection (a good 20 GB) from my backup drive took just 15 minutes. It's good to know the i5 processor + solid state drive were worth it.
    What's Bad
    • Keyboard: In a word, terrible. It's shifted to the left of the center of the screen, so I'm always twisted in my seat if I'm trying to type on it. The keys are more widely spaced than is comfortable, making my fingers work more to type anything. The spacebar also refuses to recognize most key-presses. Altogether it feels cheaply made, and is really frustrating to type on.
    • Customer Service: Not so helpful. I emailed System76 support about my spacebar problem. Their solution? "Press it closer to the center." Well, thanks, guys, but if I can only get a space by hitting a single spot on the space*bar*, it's not much use to me, is it?
    • General Build: Cheap. The "Ubuntu key" is just a regular key with a tacky sticker on it (placed off-center, no less). The laptop won't turn on unless you hit the power button at the exact right angle for the exact right amount of time. The whole thing feels shoddy.
    • Wifi: Maddeningly drops connections at random. Had to connect the laptop to ethernet to get reliable internet access. Sort of defeats the purpose of having a laptop, IMHO.
    It's a very frustrating laptop. It runs Ubuntu well, it's screaming fast, and it didn't cost me an arm and a leg. Unfortunately, it's impossible to type on for longer than a few minutes without making me want to throw it across the room, and it's useless without an ethernet connection.

    I’m close to returning it and just buying a Macbook. It might cost more, but at least I’ll know it’s solidly built.

    Update 12-15-2010: Finally convinced system76 to let me ship the computer to them for a keyboard replacement. Hopefully this’ll fix the spacebar problem and make the laptop worth keeping.

    Update 12-27-2010: They fixed the keyboard! Got the laptop back just before Christmas, and the spacebar works normally (as does the rest of the keyboard). Definitely keeping the laptop now. Thanks to system76 for coming through with hardware support.

    → 6:36 PM, Oct 23
  • iPad: 30 days in

    30 days ago, I broke down and bought an iPad.

    I know, I’ve ranted and raved against Apple’s unfair closed-source system. And yes, I hated the shopping experience my first time out.

    But lugging around my netbook + my Nook so I could read books and check email and read pdfs and read The Economist online started bugging me. Doubly so when I realized there were some ebooks I could only get from the Kindle store (damn you, DRM!), but neither my Nook nor my netbook could read Kindle books (Linux, it so happens, is the one platform Amazon’s Kindle software doesn’t run on). So I had to read Kindle books on my phone (Android), Barnes and Noble books on my Nook, and pdfs (Linux Journal - why do you guys cling to the old pdf format?) on my netbook.

    Needless to say, this drove me batshit after a while.

    So I trudged down to the Apple store to try out the iPad again.

    And guess what? My experience was much better this time. There was plenty of software installed on the demo units for me to try out. The staff answered all my questions, and actually seemed to care about selling me one.

    I picked one up, and after 30 days I don’t miss the Nook or the netbook at all.

    I can read all my books, from every store. I can push my pdfs into DropBox and read them with GoodReader. I’ve even started jotting down notes for a few short stories in Elements, again using DropBox to sync up the files between my Nook and my desktop.

    Oh, and I’ve also picked up a few games. Plants vs Zombies HD is almost worth the price of admission on its own.

    Can I program on it? Nope. Is it annoying that iTunes can’t play my .ogg files? Hell, yes.

    But you’ll have to pry it from my cold, dead hands before I’ll give it up.

    → 7:22 PM, Oct 14
  • Apple Kills Lala.com

    A few months ago, when Apple bought Lala.com, I didn’t panic. I thought Apple’d integrate Lala.com’s store into iTunes, rebrand the site, and keep things rolling.

    I was wrong:

    Dear Ron T.,

    The Lala service will be shut down on May 31st.

    In appreciation of your support over the last five years, you will receive a credit in the amount of your Lala web song purchases for use on Apple’s iTunes Store. If you purchased and downloaded mp3 songs from Lala, those songs will continue to play as part of your local music library.

    That’s part of the message I got in my inbox this morning. Apple’s shutting Lala.com down, and offering me “credit” in the iTunes store as compensation.

    Fuck you, Apple. I bought music from Lala.com because I didn’t want to buy it from your store. I had a choice, and I exercised it. But you just can’t understand that, can you?

    I will never buy anything from the iTunes store. I will never throw my money away on songs in your proprietary format.

    Lala.com’s ease of use, free access to every song in the catalog, cheap prices and web-based portability were far superior to the iTunes store. You couldn’t compete with that, could you, Apple? So you bought it and killed it.

    Fuck you.

    → 8:02 AM, Apr 30
  • The iPad Experience: Best Buy Edition

    Me: Hey, iPad! Mind if I install this cool KoboBooks app I’ve heard so much about?

    iPad: Sure thing! Just gimme your iTunes password.

    Me: I don’t have an iTunes account.

    iPad: No account, no software.

    Me: Ok…Let’s just fire up mail.google.com, so I can check my email.

    iPad: Nope.

    Me: No?! I can’t check my email?

    iPad: Sure thing! Just enter your .Mac password…

    Me: Screw this. I’m buying a netbook.

    → 2:47 PM, Apr 6
  • Apple's Garden is Walled, with Locked Gates

    I’ve just been reminded why I left Mac for Linux.

    Me (to store rep): “Can I install and play with some apps on this demo iPad so I can decide if I want to buy one?”

    Rep: “Nope. Use just what’s installed.”

    …and that’s when I left. If I can’t be allowed into the Walled Garden of Apple long enough to decide if I want to spend some money there, I’ll stick with my open-source.

    I’m off to Best Buy to look at a Linux-compatible netbook.

    → 1:20 PM, Apr 6
  • UNR on HP Mini 110

    I’ve been thinking about trying out Ubuntu Netbook Remix, the version of Ubuntu Linux made especially for netbooks like my HP Mini 110, for a while now. I was attracted to the idea of being able to run a real Linux distro on the netbook, as opposed to the tightly-controlled version that came on the Mini. HP’s version of Ubuntu–Mie–isn’t bad, so much as completely un-customizable: you can’t remove the screen-hogging front panels from the desktop, for instance, which left me staring at a large blank space where Email was supposed to appear (I use Gmail, so a desktop-bound email program is useless to me).

    So this week I finally bit the bullet, wiped the harddrive, and installed the latest version of UNR.

    Thus far, things have gone well. I had some problems with wifi at first, but running the software updater and rebooting fixed that problem. I’ve been able to download and install Wine, which lets me use the Windows version of eReader for reading my ebooks. I’ve re-arranged the icons in the menus, ripped out some software I didn’t need, and in general had a good time customizing the hell out of the OS.

    I feel like I’ve been given a new computer, one that’s more fun to use and easier to bend to my will. In the end, that’s always been the appeal of Linux to me: it puts power back in the hands of users, where it belongs.

    → 1:34 PM, Dec 31
  • eReader for Android!

    They just released a version of the eReader software (formerly Palm eReader, then just eReader, now the Barnes and Noble eReader) for the Android platform.

    It’s a little bit buggy: you need to wait for an entire book’s table of contents to load before reading/scrolling, else the book will get stuck partway through. Other than that, it works great on my G1. Nice to see a commercial ereader on a Linux platform. (Yes, the books still have DRM, but the format’s got some longevity behind it, and is supported on enough devices that I’m not worried about getting locked into one platform).

    → 8:39 PM, Nov 29
  • HowTo: Get Filezilla working after FTP connection drop

    If you’re using Filezilla to connect to a shared hosting account that uses cPanel, and your internet connection gets dropped during a transfer, you’re going to get an error from Filezilla when you try to reconnect after getting your internet up again.

    Here’s how to clear that error:

    1. Login to your shared hosting account.

    2. In cPanel, go to “FTP Session Manager”

    3. You should see a long list of “IDLE” connections. Delete each of them, one by one, using the “X” button beside each entry.

    4. Logout of cPanel when all the ftp sessions have been deleted.

    …and that’s it! You should be able to connect normally to your account through ftp again.

    → 12:28 PM, Apr 13
  • Nokia N810: 3 Months In

    I've been using my Nokia N810 for a few months now. Though enough time has gone by for the initial "Shiny!" love to wane, I still carry my Nokia around with me everywhere I go.

    Why? Because it's fulfilled everything I wanted it to do:

    1. Replace my forest of small notebooks: Check. I've gotten much better at typing on the built-in keyboard, and still enjoy swapping back and forth between using it and trusting the handwriting recognition to make sense of my scribbles. It's great to be able to jot down a story idea, then come back and fill it in later, all on the same machine. Oh, and I can carry around a single list of all the books I want to buy/read, so when I'm in the bookstore I don't have to worry about remembering the title of "that great book I read a review about six months ago."
    2. Serve as my primary ebook reader: Easily. FBReader is an awesome piece of software (for reading non-DRM'd books), and the Garnet VM lets me run my Palm-OS Mobipocket Reader in full-screen mode. Now if only I could buy more books in electronic form!
    3. Replace my iPod: Check and double-check. That's right, I gave up my Apple Shiny for Linux. I buy my songs on Amazon's MP3 store, download them to the 8GB card I've got in my Nokia, and use Canola to play everything back. Canola's got a nice user interface, and even recognizes the .ogg files I've ripped from CDs!
    4. Read (and respond) to email away from home: Check. I can not only look at GMail in regular (non-mobile) mode, but revise my Google Docs and respond to any forum discussions I'm involved in. The Nokia's browser renders web pages flawlessly, and connects to local WiFi hotspots without trouble. I only need to take my notebook if I'm doing some programming. Otherwise, the laptop stays at home.
    → 4:54 PM, Aug 4
  • Sticky Notes for your Browser: Posti.ca

    If you've ever used the post-it notes in OS X's Dashboard app, you know how handy those little buggers can be.

    But (as far as I know), you can't share those notes from computer to computer, and if you use a non-Mac OS, you can't use them at all.

    Thankfully, Posti.ca has put together a web service that gives you the look and feel of Apple's Notes Widget but runs in your browser. Since creating an account is free, you can now sign up, create notes, and view them from any computer.

    I've been using it to jot down ideas I have while at work that I need to follow up on at home. The service seems stable so far, though there are a few bugs they've yet to work out.

    → 3:34 PM, Aug 1
  • VMWare Fusion Beta 2

    VMWare Fusion is an application for the Mac that lets you run Windows as a virtual machine. No need to reboot into Windows to play games, you can just startup Fusion and run any Windows app right from your comfy Mac OS desktop.

    They've launched a new public beta (version 2), which you can download and try out for free. It's beta software, so it's not for critical stuff, but should work fine if you're curious about the software.

    I gave it a go on my work machine. Installation was easy, and it automatically detected the Boot Camp partition I've been using to run Windows with. Starting the Windows machine was easy, and it felt pretty responsive.

    ...Until I tried to run a game, that is. I started up Fable, a not-too-recent game with low graphics requirements. After a few screen hiccups, the game started, but Fusion warned me that "some shaders are not supported, and some elements may not display."

    I clicked past the warning and started a game anyway. Lo and behold, the missing shaders were: me! The main character displayed as just a pair of eyeballs floating in space. Really creepy, and kind of a deal-breaker for me.

    So I shut it down. Or rather, tried to shut it down, but had to force quit and reboot my Mac to regain control of the machine.

    It didn't perform well for me, but I'd still suggest giving the free beta a shot. Who knows? You may find it does everything *you* need it to do, making it worth picking up a copy of Fusion 2.0 when it comes out.

    → 4:28 PM, Jul 31
  • I'm a-Twitter!

    I’ve hopped on yet another Web 2.0 bandwagon by joining Twitter.

    My username’s mindbat (of course). Join up, and let’s Follow each other about all day (it’s not as creepy as it sounds, I promise).

    → 3:00 PM, May 16
  • Scary Reading: Adobe's AIR EULA

    I was all set to install an application built using Adobe's AIR platform when I took a minute to actually read the End-User License Agreement. What I read made me cancel the install, rather than agree to the EULA.

    What was so bad? Well, for a development platform that's supposed to let users run web apps from their desktop, regardless of their operating system, AIR can apparently only be installed on a single machine:

    2.1 General Use. Subject to the terms of this agreement, including the important restrictions in Section 3, you may install and use a copy of the Software on one compatible computer. The Software may not be shared, installed or used concurrently on different computers

    Did you catch that? They'll let you install it once, on one computer, and that's it. How useful is that? I migrate between a Mac computer at work and a Linux computer at home; this EULA means I can use AIR programs on one or the other, but not both.

    As if that weren't crazy enough, Adobe still has the balls to claim they offer AIR with no warranty and no guarantees. So not only do they restrict where I can install and use their "free" software, but they also won't take responsibility for any damage it causes.

    I was excited when I first heard about Adobe AIR. No more. AIR's EULA places Adobe firmly in the doesn't-care-about-user-freedom-at-all camp, and that's a camp I've left behind.

    → 10:44 AM, May 16
  • Nokia N810: First Impressions

    This is what my Palm TX should have been.

    Granted, the Palm was a good organizer and datebook. But in the 21st Century, who cares about an electronic datebook that can't load Facebook or YouTube properly? I don't need a calendar application, I need to be able to get to my Google Calendar.

    I do need to be able to watch videos without a lot of hassle. I do need to be able to read pdfs I've just downloaded. And I do need better text input methods than Graffiti.

    The Nokia N810 delivers all this and more.

    Here's how:

    1) Web browsing: The Nokia's Mozilla-based browser renders web pages correctly, with all their CSS and Flash intact. And on the tablet's 800x480 display, they look great.

    2) Ebooks: The Nokia's built-in pdf reader loads pdfs straight from the web and displays them perfectly. I don't need to have them converted first, like I did with the Palm.

    3) Handwriting recognition: The Nokia gives me true handwriting recognition right out of the box. Instead of being stuck learning someone else's shorthand (Graffiti), I can train the tablet to recognize my own personal chicken-scratch. That makes it the first device I could see replacing my many notebooks.

    → 6:18 PM, May 5
  • Shiny!

    I’m writing this on my brand-new Nokia N810 Internet Tablet!

    It’s got trainable handwriting recognition, a fully-featured web browser, and Linux under the hood.

    Goodbye, Palm. Hello, Open Source!

    → 4:04 PM, May 1
  • Turning Robo-Japanese

    Once again, the Japanese prove Charles Stross' assertion that “they got our future, dammit.”

    According to io9, Japan is set to become the first country with a robotics ministry, and has a plan to roll out robot labor in areas like janitorial services and caring for the elderly by 2010.

    From the article:

    You already see humanoid robots in Japan attending religious ceremonies, making sushi, planting rice, answering phones in corporate offices, subbing in as dance partners, and feeding old people whose motor skills are starting to fail. Animal bots have been making a big breakthrough too—from the digital Tamagochi to Paro the furry therapeutic seal, Japanese people are experts at satiating their need for companionship or assistance via low-maintenance mechanical friends. Monikers like Robot Kingdom and Robot Nation, which have been used to describe Japan since the 80s, are relevant now more than ever—with a shrinking labor force, declining birth rate, and an aging population, the demand for robotic help in hospitals, nursing homes, offices, and retail spaces is sky high. Researchers in Japan are confident that, in a few years time, humans and robots will coexist happily in a fully integrated man-machine society.

    Meanwhile, our robotics industry pursues more practical goals, like killing people from the air.

    → 10:42 AM, Apr 4
  • Happy April Fools Day!

    As one of the many beta testers on this new program, I highly recommend that everyone take advantage of Gmail’s new Custom Time feature.

    → 7:58 AM, Apr 1
  • Why DRM is worth fighting over

    Much to my chagrin, I’m going to have to respectfully disagree with Neil Gaiman over the Kindle’s DRM.

    DRM isn’t about lending books out (though that would be nice to still do), and it isn’t about trying to get stuff for free. I don’t have any problems paying for music and books (one look at my credit card statements would prove that to anyone), and I don’t want to broadcast copyrighted works over the Internet.

    What I do want to do is to be able to move my music and books from place to place and from device to device.

    I’ve already posted on how the DRM in iTunes means I’ve lost more than 3 Gigs of music just because I decided to stop using OS X. Now, if I were to get a Kindle and buy several books that I later want to move to a different device (say, a Sony Reader released in a year or two that’s much, much better), I’ll be screwed. If the books are DRM’d, they’ll be attached to just one device: the Kindle. If I want to move them, I’ll have to buy another copy. If that new copy is also DRM’d, I’ll have to buy another copy if I ever want to move it to a different ebook reader.

    I’m already dealing with this with the Adobe eBooks I bought before switching to Ubuntu. Because the books are DRM’d, I can read them on my Palm–which supports the DRM–but not on my computer, or any other electronic device I own. I passed on the PDA I wanted to buy–a Nokia N800 internet tablet–because it wouldn’t be able to read any of my DRM’d ebooks.

    So if you don’t mind being locked into one product, one company’s way of doing things, forever, then DRM is not a problem. But if you ever want to exercise your buyer’s right to choose–a key component of a free market–DRM will bite you in the ass.

    → 4:00 PM, Nov 27
  • Another Switch

    As part of the switch from OS X to Ubuntu, I’ve lost iTunes, and thus can’t listen to all the DRM-infected music I bought from the iTunes store.

    Since I bought more than 3 Gigs of music from Apple, that really p*ssed me off. But rather than strip the DRM from the music (which is now illegal–thanks to the U.S. Congress), or try to run iTunes via Wine, I’ve simply switched to a different music service: eMusic.

    eMusic runs on subscriptions, meaning for $10 a month I can download 30 DRM-free MP3 files from their library. That’s one-third of the price of songs on iTunes, and once I’ve downloaded the music I can burn ‘em to disk, move them from computer to computer, or do anything else (non-commercial) I feel like. It’s a simple, straightforward, consumer-friendly (and legal) way to download music.
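    The "one-third" claim checks out with a little arithmetic (a quick sketch using the 2007-era prices quoted above, not current ones):

    ```python
    # Sanity-check the eMusic vs. iTunes price comparison above.
    # Prices are the ones quoted in the post: $10/month for 30
    # downloads on eMusic, $0.99 per song on iTunes at the time.
    emusic_monthly = 10.00
    emusic_downloads = 30
    itunes_per_song = 0.99

    emusic_per_song = emusic_monthly / emusic_downloads
    ratio = emusic_per_song / itunes_per_song

    print(f"eMusic: ${emusic_per_song:.2f} per song")
    print(f"That's {ratio:.0%} of the iTunes price")
    ```

    At roughly $0.33 a song against $0.99, the per-track cost really is about a third.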

    Why doesn’t iTunes work like this?

    → 6:56 PM, Nov 23
  • Now With 50% More Open-Source

    Made the leap today: I wiped OS X off the hard drive on my MacBook and put a fresh, clean install of the newest version of Ubuntu (7.10, or ‘Gutsy Gibbon’) on it.

    Everything worked out of the box save for wifi, and there’s a workaround for that. 7.10 looks great, the MacBook feels more responsive, and getting/installing software is easier than in OS X. No Leopard for me. I’ll stick with the penguin.

    → 9:43 PM, Nov 9
  • Freebird

    I’ve switched. Not from PC to Mac, but from Mac to Linux.

    That’s right. I’m a card-carrying geek.

    But I’ve switched to the friendliest Linux flavor I could find: Ubuntu. It’s free, it’s open source, and it’s MINE. Mine, because everyone that uses Ubuntu owns Ubuntu. Everything is customizable, from the Desktop background down to the very kernel the operating system runs on. I can even use entirely different desktop environments if I want, kind of like being able to run a Mac OS 9-style interface on Mac OS X’s code. Unlike a Mac, I can change the code any way I want, and no one’ll come after me with a lawyer.

    In fact, if I make it better, they’ll thank me. That’s how I get tech support now: from other users, who have tweaked and poked and written code and twisted things to work just how they want, then published how-to’s online. Check out the Ubuntu Forums for some examples.

    So yeah, I’ve gotta use a command-line interface a little more now than before. And yeah, I’ve gotta spend time testing and tweaking some things to get them to work. But I can get them to work, and I didn’t have to pay a dime for them.

    Oh, and did I mention I can get Windows games to run, in Linux, without booting up Windows? Check out the Wine project. That’s the power of open source, folks.

    → 8:13 AM, Oct 18
  • RSS
  • JSON Feed
  • Surprise me!