Formerly VP of Engineering at Blekko Inc (search engine, blekko.com), formerly part of the Watson group at IBM.
Influences: Sun, Startups, NetApp, Google
Interests: Systems, Distributed Systems, really hairy convoluted inter-dependent systems.
Hobbies: Robots, Embedded control, Music, Automata.
Feel free to contact me at my gmail address: chuck.mcmanis@gmail.com
Important: If you are looking at this profile because you thought something I posted was wrong, dismissive, disrespectful, douchebaggish, what have you, I would really like to hear from you so that I can understand how you got that impression. I won't get mad and I won't "retaliate"; I just want to hear what you have to say. It is important to me to communicate clearly, and if something I said struck you that way, then I failed and I would like to correct it.
Note: Any comments on this site attributed to me are my opinions and not those of any current, past, or future employer. Consider yourself so notified.
- I like this; back when the xterm CVE was common, you could probably 0wn any botter who was looking at their logs in xterm.
- On social media I've been accused of being an AI twice now :-). I suspect it is a vocabulary thing, but it is always amusing.
- There is a lot to be said for that perspective. I wonder if any PMs have considered making the bed of the truck a FRU that you can swap out at home.
- https://archive.ph/k2S9O for those who have read their last free article.
Interesting that Rivian seems to be doing fine in this space.
- Question, isn't this a bug?

      static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)
      {
      -       if (event->state != PERF_EVENT_STATE_ACTIVE)
      +       if (event->state != PERF_EVENT_STATE_ACTIVE ||
      +           event->hw.state & PERF_HES_STOPPED)
                      return HRTIMER_NORESTART;
The concern being the relative precedence of !=, the bitwise &, and || ? (In C, != binds tighter than &, and & binds tighter than ||, so this groups the way the author likely intended, but it is easy to misread.) Consider writing it as

      if ((event->state != PERF_EVENT_STATE_ACTIVE) ||
          (event->hw.state & PERF_HES_STOPPED))

so the grouping is explicit.
This coming from someone with too many scars from not parenthesizing expressions in conditionals to ensure they work the way I meant them to work.
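To make those scars concrete, here is a minimal sketch of the classic precedence trap that motivates the parentheses (the flag name and value are made up for illustration, not taken from the kernel):

    #include <stdio.h>

    #define FLAG_STOPPED 0x02              /* hypothetical flag value */

    int main(void)
    {
        int hw_state = FLAG_STOPPED;       /* the flag is set */

        /* Intended: "is the flag clear?"  Because == binds tighter than &,
         * this parses as hw_state & (FLAG_STOPPED == 0), i.e. hw_state & 0,
         * which is always false. */
        if (hw_state & FLAG_STOPPED == 0)
            printf("flag clear (never reached)\n");

        /* Explicit parentheses say what was meant. */
        if ((hw_state & FLAG_STOPPED) == 0)
            printf("flag clear\n");
        else
            printf("flag set\n");

        return 0;
    }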
- Thanks, one of the things I bought on a campaign was a programmable USB hub from Capable Robot Components[1], and by the time I thought, "this thing is really useful, I need another one," they had become impossible to get.
[1] https://www.crowdsupply.com/capable-robot-components/program...
- Interesting that this was flagged (I do wonder if that is reflexive or not). The main argument that context is essential to precision (in current architectures) is pretty solid.
- Once again I am reminded that which accounts are fake is a knowable thing, and yet social media companies don't mitigate them "because money" or "because DAU," etc. When I was running operations at Blekko (a search engine), we were busily identifying all the bots that were attempting ad fraud or scouring the web for vulnerabilities or PII to update "people" databases. And we just mitigated them[1], even though it meant that from a traffic perspective we were blocking probably 3-4 million searches per day.
[1] My favorite mitigation was a machine that accepted the TCP connection from a bot address and then never responded (except to keepalives). I think the longest client we hung that way had been waiting over three months for a web page that never arrived. :-)
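For the curious, a minimal sketch of that kind of tarpit, assuming a plain listening socket whose accepted connections are simply parked (the port and backlog are arbitrary, and a real deployment would also raise the file-descriptor limit):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        int one = 1;
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        if (lfd < 0) { perror("socket"); return 1; }
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);       /* arbitrary port for the sketch */

        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        if (listen(lfd, 128) < 0) { perror("listen"); return 1; }

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;
            /* Keep the connection up at the TCP level but never send a byte
             * of application data; the client just waits for a page that
             * never arrives. */
            setsockopt(cfd, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
            /* Intentionally never read, write, or close cfd. */
        }
    }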
- Excellent write-up! This answered some of the questions I wasn't really comfortable asking creators. A very (and I mean very) long time ago, when people were complaining about the price of software, I wrote a Usenet post on why software costs what it does, and it enlightened a number of people to the fact that, yes, there are many things you don't think about when you're just looking at the list price.
Presumably the contract doesn't allow you to sell product directly even if you wanted to. The other thing I'm curious about is that Crowd Supply does continue to list "buying options" long after the creator has gone away, which makes me wonder if they have some sort of rights to tooling, etc., post-campaign?
- I backed this project: https://www.crowdsupply.com/modos-tech/modos-paper-monitor on Crowd Supply to see how close they can come to a "monitor" experience with an e-paper display.
- While I think that's a bit harsh :-) the sentiment of "if you have these problems, perhaps you don't understand systems architecture" is kind of spot on. I have heard people scoff at a bunch of "dead legacy code" in the Windows APIs (as an example) without understanding the challenge of moving millions of machines, each at different places in the evolution timeline, through to the next step in the timeline.
To use an example from the article, there was this statement: "The split to separate repos allowed us to isolate the destination test suites easily. This isolation allowed the development team to move quickly when maintaining destinations."
This is architecture bleed-through. The format produced by Twilio "should" be the canonical form, which is submitted to the adapter that mangles it into the "destination" form. Great: that transformation is expressible semantically in a language that takes the canonical form and spits out the special form. Changes to the transformation expression should not "bleed through" to other destinations, and changes to the canonical form should be backwards compatible so that changes at the source cannot impact the destinations. At all times, if something worked before, it should continue to work without touching it, because the architecture boundaries are robust.
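A toy sketch of that boundary (made-up names, not Twilio's actual code): a canonical event struct, one adapter per destination, and nothing else knowing a destination's special form:

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical canonical form produced by the source side.  New fields
     * may be appended over time, but existing ones keep their meaning, so
     * existing adapters keep working untouched. */
    typedef struct {
        const char *event_name;
        const char *user_id;
        long        timestamp;
    } CanonicalEvent;

    /* One adapter per destination: the only place a destination's
     * "special form" is known. */
    typedef void (*DestinationAdapter)(const CanonicalEvent *ev);

    static void adapt_for_dest_a(const CanonicalEvent *ev)
    {
        printf("A {\"event\":\"%s\",\"uid\":\"%s\"}\n", ev->event_name, ev->user_id);
    }

    static void adapt_for_dest_b(const CanonicalEvent *ev)
    {
        printf("B |%ld|%s|%s\n", ev->timestamp, ev->user_id, ev->event_name);
    }

    int main(void)
    {
        DestinationAdapter adapters[] = { adapt_for_dest_a, adapt_for_dest_b };
        CanonicalEvent ev = { "signup", "user-42", 1700000000L };

        /* Editing adapt_for_dest_b cannot bleed through to destination A;
         * only B's adapter (and its tests) are touched. */
        for (size_t i = 0; i < sizeof(adapters) / sizeof(adapters[0]); i++)
            adapters[i](&ev);

        return 0;
    }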
Being able to work with a team that understood this was common "in the old days" when people were working on an operating system. The operating system would evolve (new features, new devices, new capabilities) but because there was a moat between the OS and applications, people understood that they had to architect things so that the OS changes would not cause applications that currently worked to stop working.
I don't judge Twilio for not doing robust architecture; I was astonished when I went to work at Google how lazy everyone got when the entire system is under their control (there are no third-party apps running in the fleet). There was a persistent theme of some bright person "deciding" to completely change some interface and Wham! every other group at Google had to stop what they were doing and move their code to the new thing. There was a particularly poor 'mandate' around a new version of their RPC while I was there. As Twilio notes, that can make things untenable.
- I think you did a great job of bringing fairly nuanced problems into perspective for a lot of people who take their interactions with their phone/computer/tablet for granted. That is a great skill!
I think a fertile area for investigation would also be 'task specific' interactions. In XDE[1], the thing that got Steve Jobs all excited, the interaction models are different if you're writing code, debugging code, or running an application. There are key things that always work the same way (cut/paste, for example) but other things that change based on context.
And echoing some of the sentiment I've read here as well, consistency is a bigger win for the end user than form. By that I mean even a crappy UX is okay if it is consistent in how it's crappy. I heard a great talk about Nintendo's design of the 'Mario world' games and how the secret sauce was that Mario physics are consistent, so as a game player, if you know how to use the game mechanics to do one thing, you can guess how to use them to do another thing you've not yet done. Similarly with UX: if the mechanics are consistent, they give you a stepping-off point for doing a new thing you haven't done, using mechanics you are already familiar with.
[1] Xerox Development Environment -- This was the environment everyone at Xerox Business Systems used when working on the Xerox Star desktop publishing workstation.
- Same way I feel about 'Military Intelligence' :-). Both of those phrases use the 'information gathering and analysis' definition of intelligence rather than the 'thinking' definition.
- Why do people call it "Artificial Intelligence" when it could be called "Statistical Model for Choosing Data"?
"Intelligence" implies "thinking" for most people, just as "Learning" in machine learning implies "understanding" for most people. The algorithms created neither 'think' nor 'understand' and until you understand that, it may be difficult to accurately judge the value of the results produced by these systems.
- I miss Gassée's Monday Note; it seems he hasn't published one since 2023.
- I was just in a discussion on this very topic. It's the build-vs-buy equation applied to silicon. Early in the tech boom the entire silicon stack was proprietary and required a lot of time and investment to train up people who could design the circuitry. Then we got our first "ASICs," which were basically a bunch of circuitry on a die to which you added your own metal layer, so it was like having a bunch of components glued to a board that you could "customize" by putting wires between the parts. Then we had fabs that needed more wafer starts, so they started doing other people's designs, which required that they standardize their cells and provide integration services (you brought a design and they mapped it to their standard cells and process). And as density kept going up, they kept having lots of free space they needed to fill. The 'fabless' chip companies continued to invest in making new parts until the pipeline was pretty smooth, and at that point the level of training you needed at the origin to get a design into silicon dropped to nearly zero; you just needed the designs. And into that space, people who were neither 'chip' companies nor 'fabless' OEMs realized they could get their integration needs met by asking a company to make them a chip that did exactly what they wanted.
On the business side, the economics are fabulous: your competitors can't "clone" your product if they don't have your special-sauce components. So in many ways it becomes a strategic advantage for maintaining your market position.
But all of that happened because the all-up cost to go from a specification to parts meeting that specification dropped into the range where you could build special parts and still price your finished product at the market.
A really interesting illustration is to look at disk drive controller boards from the Shugart Associates ST-506 (5MB) drive, to Seagate's current offerings. It is illustrative because disk drives are a product that has been ruthlessly economized because of low margins. The ST-506 is all TTL logic and standard analog parts, and yet current products have semiconductor parts that are made exactly to Seagate's design specs and aren't sold to anyone else.
So to answer your question: apparently the economics work out. The costs associated with designing, testing, and packaging your own silicon appear to be cost-effective even on products with exceptionally tight margins; it is likely a clear winner on a product that enjoys the margins that electric vehicles offer.
- This was perhaps my favorite part of Physics 390 ("modern physics"), which was about quantum dynamics and relativity. The speed of light is defined in terms of a velocity (~300,000,000 m/s), but if you were traveling at the speed of light, time stops (which keeps the rule that it's constant in all frames of reference). That, and time passes more quickly at higher altitudes, and these days we can actually measure that. Wild stuff.
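A rough back-of-the-envelope for that altitude effect, using the weak-field approximation (my own illustration, not from the course): the fractional rate difference between two clocks separated by a height of 1 km is about

    \frac{\Delta t}{t} \approx \frac{g \, \Delta h}{c^2}
        = \frac{(9.8\ \mathrm{m/s^2})(1000\ \mathrm{m})}{(3 \times 10^{8}\ \mathrm{m/s})^2}
        \approx 1.1 \times 10^{-13}

so the higher clock gains roughly 86,400 s/day × 1.1×10⁻¹³ ≈ 9 ns per day, and modern optical clocks can resolve the effect over height differences of well under a meter.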
- It is absolutely a different company. I had the opportunity to intern there twice in the late '70s and then was acquired by them in 2015; the IBM of 1978 and the IBM of 2015 were very different businesses. Having "grown up," so to speak, in the Bay Area tech company ecosystem, where companies usually died when their founding team stopped caring, IBM was a company that had decided, as an institution, to encapsulate what it took to survive in the company's DNA. I had a lot of great discussions (still do!) with our integration executive (that is the person who is responsible for integrating the acquisition with the larger company) about IBM's practice in terms of change.
- No. They are a multi-generational institution at this point and they are constantly evolving. If you work there it definitely FEELS like they are dying because the thing you spent the last 10 years of your career on is going away and was once heralded as the "next big thing." That said, IBM fascinated me when I was acquired by them because it is like a living organism. Hard to kill, fully enmeshed in both the business and political fabric of things and so ultimately able to sustain market shifts.
How are we defining "learning" here? The example I like to use is that a student who "learns" what a square root is can calculate the square root of a number on a simple 4-function calculator (×, ÷, +, -), albeit iteratively, whereas the student who "learns" that the √ key gives them the square root is "stuck" when presented with a 4-function calculator. So did they 'learn' faster when the "genie" surfaced a key that gave them the answer? Or did they just become more dependent on the "genie" to do the work required of them?
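To make the iterative version concrete, here is a minimal sketch of Heron's (Babylonian) method using only the four basic operations; the starting guess and tolerance are arbitrary, and it assumes n > 0:

    #include <stdio.h>

    /* Approximate sqrt(n) using only +, -, ×, and ÷ by repeatedly
     * averaging the guess with n / guess. */
    static double iterative_sqrt(double n)
    {
        double guess = n / 2.0;                    /* arbitrary starting guess */
        for (int i = 0; i < 50; i++) {
            double next = (guess + n / guess) / 2.0;
            double diff = next - guess;
            if (diff < 0) diff = -diff;            /* |next - guess| without fabs() */
            guess = next;
            if (diff < 1e-12)
                break;
        }
        return guess;
    }

    int main(void)
    {
        printf("sqrt(2)  ~ %.12f\n", iterative_sqrt(2.0));
        printf("sqrt(10) ~ %.12f\n", iterative_sqrt(10.0));
        return 0;
    }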