> A key performance metric of computers is their energy dissipation. One contribution to dissipation is friction at the rotary joints in each logic gate. Due to the joint’s small frictional drag, mechanical computers constructed from them can, in principle, dissipate orders of magnitude less power than conventional semiconductor computers, while still operating at relatively high speeds.
This claim seems dubious; perhaps someone with more expertise can comment. The justification:
> Operating this lock involves rotation at the joints by up to ∆θ ≈ 1 rad. The model system analyzed in  is an excerpt of the links and joints shown in the closeup on the right of Figure 24. From [11, Eq. 2], this rotation dissipates about 2.4×10⁻²⁷ J per rotary joint when operating at f = 100 MHz.
Seems akin to saying "We operate our microchip at 1 microvolt / 1 picoampere at f = 100 MHz, giving 10⁻²⁶ J per operation" -- a silly aspiration without carefully analyzing noise and quantum-mechanical constraints (without which almost any computing device could seemingly operate at arbitrarily low power).
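For a sanity check on how low is "too low", it's worth comparing the claimed per-joint figure against the Landauer limit kT·ln 2, the minimum dissipation for an *irreversible* bit erasure at temperature T. A quick script (room temperature assumed; the claimed figure is taken from the quote above):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assume room temperature

# Landauer limit: minimum dissipation per irreversible bit erasure
landauer = k_B * T * math.log(2)   # ~2.87e-21 J

claimed = 2.4e-27                  # J per rotary joint, per the quoted passage

print(f"Landauer limit at {T:g} K:     {landauer:.2e} J")
print(f"Claimed per-joint dissipation: {claimed:.2e} J")
print(f"Ratio (Landauer / claimed):    {landauer / claimed:.2e}")
```

The claimed figure sits roughly six orders of magnitude *below* kT·ln 2, which is only thermodynamically permissible if the logic is (nearly) reversible and erases essentially nothing per cycle -- so the noise/thermodynamics question is exactly the right one to press on.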
Did no one else notice the name of the lead author? Ralph C. Merkle is a rather well-known computer scientist, involved in public-key cryptography and hashing algorithms, and the inventor of the Merkle tree that is at the core of the blockchain.
It somewhat reminds me of https://en.wikipedia.org/wiki/Z1_(computer), although this mechanism is definitely superior in terms of friction. I guess most research into mechanical computers stopped shortly after electrical/relay machines started becoming more interesting, which is why there are definitely better mechanical designs possible.
In fact, people probably use a lot of mechanical devices which can serve as logic gates without knowing it --- although everyone focuses on electronic digital computers, the mechanisms on which computers can be built are surprisingly vast and simple.
Also, the PDF is rather bloaty for the content, because all the figures are extremely high-resolution bitmap images instead of vector graphics, despite looking like they were created with a vector graphics program.
This is intriguing. Since the links and rotary joints will be susceptible to friction, I wonder if you could pump oil through the machine to flush out any shavings that might be created.
I wonder if the fact that the gates etc. are in a single plane could be used to make it function more effectively as well.
It certainly is using a different mechanical approach than the Difference Engine.
Heh, one step closer to a real self-replicating RepRap :) Now to implement a PID controller...
If people like reading outlandish papers like this, see the full queue.
Billiard Ball Computer is just made out of billiard balls :).
It's Turing complete. I'm wondering if the mechanism described in the paper does something above and beyond that...
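For context on why billiard balls suffice: elastic collisions can implement the Fredkin (controlled-swap) gate, which is reversible and universal. A toy truth-table check -- plain Python, no billiard physics, just the gate's logic:

```python
from itertools import product

def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swap a and b iff the control c is 1."""
    return (c, b, a) if c else (c, a, b)

# The gate is its own inverse -- no information is destroyed:
for c, a, b in product((0, 1), repeat=3):
    assert fredkin(*fredkin(c, a, b)) == (c, a, b)

# Ordinary logic falls out by pinning inputs to constants:
def AND(x, y):
    return fredkin(x, y, 0)[2]   # third output is x AND y

def NOT(x):
    return fredkin(x, 0, 1)[2]   # third output is NOT x

print([AND(x, y) for x, y in product((0, 1), repeat=2)])  # [0, 0, 0, 1]
print([NOT(x) for x in (0, 1)])                           # [1, 0]
```

So the balls alone buy you universality; the interesting question is what the link-and-joint design adds in terms of practicality (no ballistic timing requirements, for one).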
Technical question: Could someone explain why, in the shift register, the output lock is needed at all?
Couldn't you simply connect the output of each holding lock to the respective input of the next cell and get the same results?
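A guess at the answer, by analogy with master–slave flip-flops: with a single lock per cell, released everywhere at once, a bit can ripple through several cells in one clock period (race-through). Two alternately clocked locks per stage let each cell hold its old value while the next cell reads it. A toy model of my own simplification -- not the paper's actual linkage:

```python
def shift_single(cells, inp):
    """One lock per cell, all released together: each cell copies its
    predecessor, but the predecessor has already been overwritten."""
    for i in range(len(cells)):
        cells[i] = inp if i == 0 else cells[i - 1]
    return cells

def shift_two_phase(cells, inp):
    """Two locks per stage (holding + output): new values are latched
    from the *old* state, so the bit advances exactly one cell."""
    old = cells[:]               # phase 1: holding locks capture state
    for i in range(len(cells)):  # phase 2: output locks release
        cells[i] = inp if i == 0 else old[i - 1]
    return cells

print(shift_single([0, 0, 0], 1))     # [1, 1, 1] -- the bit raced through
print(shift_two_phase([0, 0, 0], 1))  # [1, 0, 0] -- moved exactly one cell
```

If that analogy holds, the output lock is what isolates each cell's stored value from the next cell's capture window.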
It looks like you could really easily prototype toy projects with this system using a pegboard and maybe a half-dozen part types: pins, bell cranks, locks, spacers/washers, and joints and links in a variety of lengths.
Isn't this how the technology was described in Stephenson's book "The Diamond Age", in which computing was done with nanoscale, Babbage-like machines?
At large scale, I wonder what kind of horsepower would be required to run a "useful" implementation built at a scale a normal workshop could produce?
I want to build a few of these at 3D-print scale, but I'd love to see someone prototype them at the micro level -- anyone have access to lithography equipment? Another basic question is the potential clock speed based on size and material; curious if anyone has done some basic ballpark theoreticals.
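One crude way to ballpark the speed ceiling: a link can't be actuated much faster than the time a sound wave takes to traverse it, so f_max ~ v_sound / L. Rough numbers of my own choosing (approximate sound speeds, not figures from the paper):

```python
# Very rough upper bound: a "rigid" link must feel its actuation,
# so the cycle time can't beat the acoustic transit time across it.
materials = {          # approximate speed of sound, m/s
    "PLA (3D print)": 2200.0,
    "steel":          5000.0,
    "diamond":       12000.0,
}

for name, v in materials.items():
    for L in (0.05, 1e-7):   # 5 cm printed link vs ~100 nm molecular link
        f_max = v / L        # Hz, order-of-magnitude ceiling only
        print(f"{name:15s} L = {L:g} m -> f_max ~ {f_max:.1e} Hz")
```

By this estimate a 5 cm plastic link tops out somewhere in the tens of kHz, while a ~100 nm molecular link allows well over 10 GHz -- which would make the 100 MHz operating point in the quoted passage look comfortable, if the bound is the right one.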
Couldn't someone make a simulation in Algodoo or something? That'd be useful as a proper illustration of how it works.