They have created platforms of unimaginable complexity. But if they’re not sick to their stomach about what has happened in Myanmar or overwhelmed by guilt about how their platforms were used by Russian intelligence to subvert their own country’s democracy, or sickened by their own role in what happened in New Zealand, they’re not fit to hold these jobs or wield this unimaginable power.
The argument that they are just a common carrier, not responsible for the content, does not fly. They are wittingly allowing propaganda, agitprop and other unwelcome content to be disseminated around the world, and benefiting by getting advertisers to pay to be associated with that content. The common carrier excuse worked for some industries, where the carrier was transmitting content from known providers, but now that the carrier enables publishing and broadcasting from any random internet connection, Facebook is both the publisher and the carrier. Hence it is responsible for the content.
If it cannot make a valid business model out of properly curated content, then too bad, it does not deserve to exist. There are many business models that society does not allow to exist, and publishing/promoting propaganda, agitprop and dubious content is one of the business models that needs to be controlled by society.
Twitter probably falls into the same category, given the way the platform promotes extreme political rhetoric. There is reasonable evidence that multiple elections around the world have been adversely affected by the various social media platforms over the past several years. So before outside influences create more havoc, societies around the world need to come up with a strategy for dealing with social media companies that profit from spreading information designed to decrease the quality of life for everyone.
Dave Snowden is writing up the Definitive History of the Cynefin Framework, so I thought it was time to mention it here. Dave was one of the creators of DSDM, which I covered in my Questioning XP book, so it is nice that he has now come up with a way to talk about methodologies.
Obvious - this is the domain of Best Practices, where everyone knows how to operate, so it is a matter of just doing what everyone knows how to do.
Complicated - this is the domain of learned expertise; there are multiple good answers, but careful analysis might be needed to discover the way forward. One metaphor refers to this as the domain of the bicycle: if it is not working right you can take it apart, discover what is broken and then reassemble it.
Complex - this is the domain where good answers are only discovered in retrospect. A metaphor for this is that of the frog: you cannot take it apart, discover what is wrong and then reassemble it; you have to try different treatments on the whole organism. Dave Snowden talks about Safe to Fail experiments in this domain.
Chaotic - this is the domain of no clear cause and effect, so you just need to take action to try to move out of the chaotic state into one of the other domains.
Disorder - this is the domain of not knowing which domain you are in.
There are some things in software development that fall into the Obvious domain, but mostly there is an existing product or library that handles this domain for you. So if your application needs to store some data, then depending on what the data is, the choice of filesystem, transient cache, database or offsite cloud storage will be obvious. There may be some debate as to the flavour and/or vendor of the storage mechanism, but storing data is a known problem with well known solutions.
In part, some of my Software Craftsmanship book was raising issues about using techniques that are relevant to the Obvious domain in software development. A factory with a mechanical metaphor is appropriate for the Obvious domain; after all, we know how to assemble a car. But the reason we know how to assemble a car is that experts working in the Complicated domain did a lot of Design for Manufacture work on the design of the car so that it could be economically made in a factory. Designing an assembly line is a very complicated process, but once it is built, it is Obvious what you are supposed to do at each work station along the line.
In software development, all of the Obvious domains are well served, so what is left is the Complicated and Complex domains, where off the shelf solutions are not available. Looking back up at the image of the Cynefin domains, some methodologies are better suited to domains that are not very Complicated, bordering on the Obvious; others, like Jim Highsmith’s Adaptive Software Development, are targeted at working in the Complex domain, with the three project phases of speculation, collaboration and learning.
I love it when software developers say “How hard can it be?!” and decide to build their own complete replacement system. The results are usually about as bad as the first system, for the same reason. To be fair, this stuff is really hard to write – which is all the more reason to be skeptical when someone says they’ll just put together a modular cloud-based version of their own. You should always ask “why do you believe you will get right the things that everyone else got wrong? Because the reasons that they got it wrong apply to you, as well.”
Learning the world, an introduction to SQL for Business Analysts. Uses PostgreSQL, but most of the SQL in the book is standard and could be used on any other database. Might need an appendix or web reference for the database-specific queries that look at the table catalogs.
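As a rough sketch of what those catalog queries look like, here is one that works on PostgreSQL (and any other database that implements information_schema); the 'public' schema name is just the PostgreSQL default, and this is my illustration rather than anything taken from the book.

```sql
-- List the tables and columns visible in the default PostgreSQL schema.
-- information_schema is in the SQL standard, but Oracle (ALL_TAB_COLUMNS)
-- and DB2 (SYSCAT.COLUMNS) need their own flavour of this query.
SELECT table_name,
       column_name,
       data_type
FROM   information_schema.columns
WHERE  table_schema = 'public'
ORDER  BY table_name, ordinal_position;
```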
When asked how particle physicists address group-think, Gianotti explains instead why some research avenues require large communities.
You would think that sufficiently much has been written about cognitive biases and logical fallacies that even particle physicists took note, but at least the ones I deal with have no clue. If I ask them what measures they take to avoid cognitive biases when evaluating the promise of a research direction, they will either mention techniques to prevent biased data-analysis (different thing entirely), or they will deny that they even have biases (thereby documenting the very problem whose existence they deny).
Sabine Hossenfelder’s book Lost in Math has a lot more about the background to this.
The obvious fun question that arises from this is: where are we doing this in software development?
From the “Software Art Thou” series on YouTube, this talk covered the idea of ensuring that your entire team has the same understanding of the problem domain.
The talk also references a tool that enables the construction of Concept Maps.
I have not got any examples I can share, but the thought occurs that I have seen quite a few projects become stressed or fail due to delays in starting work on the project.
Something to ponder when looking at specific delivery dates and finding that the start of the project is delayed.
One problem with using biometrics as an authentication mechanism is that mere presence is not authentication. Aside from some of the more gruesome science fiction stories – does the finger with the fingerprint need to be attached to the rest of the body? – there is also the case that just because the finger touched the sensor, it does not mean that the person intended to unlock anything.
Another problem is environmental: when it is -40 or below, who wants to touch anything? Another case is sterile environments – you do not want to touch anything with bare skin after scrubbing up. A related problem exists in industrial environments where hands might be exposed to paint, ink, oil or any of a wide variety of other substances that make reading a fingerprint unreliable.
Denial of service is also a problem in cases where the relevant print is damaged or hidden due to injury.
Overall, biometrics might be a possible solution for some extreme situations, but for run-of-the-mill unlocking of access in most real-life transactions, they do not provide the necessary intentional action or ease of use.
Although Identity Theft has entered the lexicon, it is just sloppy journalism. Nobody is stealing the identity of another person; what they are doing is stealing identifying information about other people. This then becomes a problem because all too many companies, organizations and systems use identifying information as an authentication token.
Ever seen a library system that uses the last four digits of your phone number as your password?
Have banks finally stopped asking for Mother’s Maiden Name?
The problem is that Weak Authentication has become the default for too many companies, organizations and systems, and our legal systems have not put the onus of fixing this in the right place.
Why is it suddenly the victim’s problem when a bad actor takes out a loan in the victim’s name?
It made me wonder if we do similar things in software development. Are we getting better at doing the wrong things? Something like the XML-RPC specification that was improved to become the Simple Object Access Protocol specifications, known as SOAP, under the auspices of the World Wide Web Consortium (W3C). This led to the need for tools to write and validate XML Schemas, leading to 1000+ line WSDL files that describe the SOAP end points.
This blog started back in 2006 running under Typo. It had a long run, but in 2017, after upgrading the version of Ruby, it stopped working properly.
Finally got around to fixing it by upgrading to Publify, the successor to Typo. It was remarkably easy to set it up and then migrate the data over to the new database schema.
One thing I have noticed now that it is running under Rails 5.2.x is that it is much slower to restart and to serve new content than the original version that ran under Rails 2.3.x. Yes, Publify has a lot more features, but since I do not support comments/trackback/ping/Twitter etc. on this blog, most of the extra stuff is not used, so what I really notice is that it is much, much slower. It could also be that I have been working with Elixir/Phoenix recently and have got used to the speed of that for development and page rendering, so moving back to Rails just feels slow now.
We are social animals, and we are wired to want to connect, want approval, want to share, and want to organize on the platform where everyone else is, and this, for now, is in Facebook’s advantage. Additionally, it’s hard to say that Facebook is all bad: it does connect people, it has helped organize meetups and events, and it does make the world more interconnected.
But, as Facebook’s users, we and our data are its product. And, as we understand more about how this data is being used, we can still play on Facebook’s playground, by its rules, but be a little smarter about it.
One amusing part of this article is that it is hosted on GitHub, another social sharing platform. It is as if even tech people find it too much trouble to host their own data.
Primary keys are sorted to the top of the table symbols
Lines are thicker on hover to make it easier to select the relevant symbol
Query does not filter out empty tables.
This completes the set of databases that I have made this work for; I might include DB2 at some point in the future if I ever work on an IBM system.
For this interactive version, hovering over the lines makes them larger so that you can click to highlight the line. This makes it easy to plan out a query by following the links between the relevant tables, regardless of where they are on the screen. A good example of this would be tracing out which languages of DVDs are rented out in a specified city. This needs seven tables and six relationships to determine, and it is much easier to have the path highlighted while writing the query than to have to remember the path as you write the query.
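As a sketch of that seven-table path, assuming the familiar DVD rental sample schema (Sakila/Pagila) that the table names suggest – the city name below is just a placeholder – the query ends up walking from city through address, customer, rental, inventory and film to language:

```sql
-- Which languages of DVDs are rented in a specified city?
-- Seven tables, six relationships (Sakila/Pagila sample schema assumed).
SELECT l.name   AS language,
       COUNT(*) AS rentals
FROM   city      ci
JOIN   address   a  ON a.city_id      = ci.city_id
JOIN   customer  cu ON cu.address_id  = a.address_id
JOIN   rental    r  ON r.customer_id  = cu.customer_id
JOIN   inventory i  ON i.inventory_id = r.inventory_id
JOIN   film      f  ON f.film_id      = i.film_id
JOIN   language  l  ON l.language_id  = f.language_id
WHERE  ci.city = 'Lethbridge'   -- the specified city
GROUP  BY l.name;
```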
The experimental section of the paper is worth a read, and again, you can tell that Matzger’s group has good technique because everyone made it intact to the writing of the manuscript. There are pictures of the crystals themselves, which are very nice, until you realize that they’re plotting to blow you into the ceiling crawl space at the first opportunity. It says that “no unplanned detonations were encountered” during the work, which is a nice distinction.
An intriguing presentation on the effects of ice sheet melting on sea level rise, primarily due to the gravitational pull of the large mass of the existing ice sheets.
Spoiler Alert! Next to the ice sheets, the sea level can actually fall as a result of the ice melting, due to the loss of the gravitational pull from the mass of the ice sheet. It will fall even further over geological time due to the rebound of the crust when the weight of the ice is removed. Canada is rebounding at approximately 1 mm/yr in response to the removal of the ice sheets from the last ice age.
In every iteration, have a few bugs that do not get fixed. After five or six iterations you can build up a reasonably sized bug backlog without even trying, and the best bit is that you can hide them in the previous iterations so nobody important sees them.
Obvious fixes:
If there is anything left over in the current iteration, move it into the next and increase the priority of that item.
Review all items that overflow into the next iteration to make sure that the team understands what is needed.
Publish the failure up the management chain if a defect survives two iterations.
The only problem I’ve identified is that the databases that most need a generated ERD are often lacking the foreign keys that this query uses to identify the relationships…
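For reference, this is roughly the shape of query involved – the information_schema flavour that PostgreSQL supports, with column aliases of my own choosing rather than anything from the actual tool. When a database has no declared foreign keys it simply returns no rows, which is exactly the problem.

```sql
-- Find the declared foreign key relationships between tables for the ERD.
SELECT tc.table_name   AS child_table,
       kcu.column_name AS child_column,
       ccu.table_name  AS parent_table,
       ccu.column_name AS parent_column
FROM   information_schema.table_constraints tc
JOIN   information_schema.key_column_usage kcu
       ON  kcu.constraint_name   = tc.constraint_name
       AND kcu.constraint_schema = tc.constraint_schema
JOIN   information_schema.constraint_column_usage ccu
       ON  ccu.constraint_name   = tc.constraint_name
       AND ccu.constraint_schema = tc.constraint_schema
WHERE  tc.constraint_type = 'FOREIGN KEY';
```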