Creating a simple Elixir application to test database connectivity to a legacy Oracle database (SCOTT) using jamdb_oracle. Sorry for the wall of text, but I could not find this clearly documented anywhere else, so I am putting it out here in case I ever need to find it again.
C:\Dev\lelixir>mix new ora --sup
* creating README.md
* creating .formatter.exs
* creating .gitignore
* creating mix.exs
* creating config
* creating config/config.exs
* creating lib
* creating lib/ora.ex
* creating lib/ora/application.ex
* creating test
* creating test/test_helper.exs
* creating test/ora_test.exs
Your Mix project was created successfully.
You can use "mix" to compile it, test it, and more:
cd ora
mix test
Run "mix help" for more commands.
C:\Dev\lelixir>cd ora
C:\Dev\lelixir\ora>
To set up for Oracle using jamdb_oracle, first edit ./mix.exs, adding the extra applications that need to run:
# Run "mix help compile.app" to learn about applications.
def application do
[
extra_applications: [:logger, :ecto, :jamdb_oracle],
mod: {Ora.Application, []}
Further down, add in the dependencies, specifying the versions the application wants from Hex.pm:
# Run "mix help deps" to learn about dependencies.
defp deps do
  [
    {:ecto, "~> 3.0"},
    {:jamdb_oracle, "~>0.3.2"}
  ]
end
Then run mix deps.get to fetch the dependencies. Note that extra packages get pulled in when the library you request has dependencies of its own.
This is followed by compilation; I got some warnings here, but everything still worked later on.
C:\Dev\lelixir\ora>mix compile
==> base64url (compile)
Compiled src/base64url.erl
==> connection
Compiling 1 file (.ex)
Generated connection app
==> jose
Compiling 90 files (.erl)
Compiling 8 files (.ex)
warning: function Poison.EncodeError.exception/1 is undefined
(module Poison.EncodeError is not available)
lib/jose/poison/lexical_encoder.ex:8
Generated jose app
===> Compiling telemetry
==> decimal
Compiling 1 file (.ex)
Generated decimal app
==> db_connection
Compiling 16 files (.ex)
Generated db_connection app
==> ecto
Compiling 54 files (.ex)
Generated ecto app
==> ecto_sql
Compiling 25 files (.ex)
Generated ecto_sql app
==> jamdb_oracle
Compiling 5 files (.erl)
Compiling 3 files (.ex)
warning: function table_exists_query/1 required by behaviour Ecto.Adapters.SQL.Connection
is not implemented (in module Ecto.Adapters.Jamdb.Oracle.Connection)
lib/jamdb_oracle_ecto.ex:138
Generated jamdb_oracle app
==> ora
Compiling 2 files (.ex)
Generated ora app
After that I had to run mix ecto.gen.repo, which gave the output:
warning: could not find Ecto repos in any of the apps: [:ora].
You can avoid this warning by passing the -r flag or by setting the
repositories managed by those applications in your config/config.exs:
config :ora, ecto_repos: [...]
** (Mix) ecto.gen.repo expects the repository to be given as -r MyApp.Repo
This required the following edits to ./config/config.exs:
# This file is responsible for configuring your application
# and its dependencies with the aid of the Mix.Config module.
use Mix.Config
config :ora, Ora.Repo,
  database: "SCOTT", # original Oracle test database
  username: "user",
  password: "pass",
  hostname: "db.domain.name",
  port: 1521 # default oracle port

config :ora, ecto_repos: [Ora.Repo]
Rerunning the command was then successful, with the message:
* creating lib/ora
* creating lib/ora/repo.ex
* updating config/config.exs
Don't forget to add your new repo to your supervision tree
(typically in lib/ora/application.ex):
# For Elixir v1.5 and later
{Ora.Repo, []}
# For Elixir v1.4 and earlier
supervisor(Ora.Repo, [])
And to add it to the list of ecto repositories in your
configuration files (so Ecto tasks work as expected):
config :ora,
ecto_repos: [Ora.Repo]
At this point, ./lib/ora/repo.ex needed a minor edit to use the jamdb_oracle adapter:
defmodule Ora.Repo do
  use Ecto.Repo,
    otp_app: :ora,
    adapter: Ecto.Adapters.Jamdb.Oracle
end
and ./lib/ora/application.ex needed the repo added to its supervision tree:
def start(_type, _args) do
  # List all child processes to be supervised
  children = [
    {Ora.Repo, []}
    # Starts a worker by calling: Ora.Worker.start_link(arg)
    # {Ora.Worker, arg}
  ]

  opts = [strategy: :one_for_one, name: Ora.Supervisor]
  Supervisor.start_link(children, opts)
end
At this point, it is OK to test it out:
iex -S mix
Compiling 4 files (.ex)
Generated ora app
Interactive Elixir (1.8.1) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> alias Ora.Repo
Ora.Repo
iex(2)> import Ecto.Query, only: [from: 2]
Ecto.Query
iex(3)> query = from e in "emp", where: e.ename == "SMITH", select: e.empno
#Ecto.Query
iex(4)> Repo.all(query)
16:10:06.896 [debug] QUERY OK source="emp" db=15.0ms
SELECT n0.empno FROM emp n0 WHERE (n0.ename = 'SMITH') []
[7369]
iex(5)>
Mission accomplished: connected to the legacy Oracle database (SCOTT) using Elixir and jamdb_oracle.
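As a follow-up, the schemaless query from the iex session can be wrapped in a small helper module so it can be called from application code rather than typed by hand. This is only a minimal sketch; the Ora.Emp module name and lookup_empno/1 function are illustrative choices of mine, not something generated by mix.

defmodule Ora.Emp do
  # Illustrative helper for querying the legacy EMP table (hypothetical module).
  import Ecto.Query, only: [from: 2]
  alias Ora.Repo

  # Returns the employee numbers for a given name, using the same
  # schemaless query that was run by hand in the iex session above.
  def lookup_empno(ename) do
    query = from e in "emp", where: e.ename == ^ename, select: e.empno
    Repo.all(query)
  end
end

Calling Ora.Emp.lookup_empno("SMITH") should then return [7369], the same result as the interactive session.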
Recently I have been looking at Erlang and Elixir, and in the process was reading Coders at Work and came across this quote from Joe Armstrong (pg 213)
I think the lack of reusability comes in object-oriented languages, not in functional languages. Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
If you have referentially transparent code, if you have pure functions – all the data comes in its input arguments and everything goes out and leaves no data behind – it’s incredibly reusable. You can just reuse it here, there and everywhere…
When an audience member, tiring of this foggy talk, asked if there was anything concrete that blockchains could offer the NHS, they responded that asking for practical uses of Blockchain was “like trying to predict Facebook in 1993.” The main takeaway for the health care sector people I was with was swearing never to use said accounting firm for anything whatsoever that wasn’t accounting.
What these academics are not doing is asking the questions that society needs answered to decide what the role of driverless cars will be.
Ashley Nunes suggests
This leads to something many academics overlook: driverless does not mean humanless. My research on the history of technology suggests that such advances might reduce the need for human labour, but it seldom, if ever, eliminates that need entirely. Regulators in the United States and elsewhere have never signed off on the use of algorithms crucial to safety without there being some accompanying human oversight. Rather than rehashing decisions from Philosophy 101, more academics should educate themselves on the history of the technology and the regulatory realities that surround its use.
They have created platforms of unimaginable complexity. But if they’re not sick to their stomach about what has happened in Myanmar or overwhelmed by guilt about how their platforms were used by Russian intelligence to subvert their own country’s democracy, or sickened by their own role in what happened in New Zealand, they’re not fit to hold these jobs or wield this unimaginable power.
The argument that they are just a common carrier not responsible for the content does not fly. They are wittingly allowing propaganda, agitprop and other unwelcome content to be disseminated around the world, and benefiting by getting advertisers to pay to be associated with the content. The common carrier excuse worked for some industries, where the carrier was transmitting content from known providers, but now that the carrier enables publishing and broadcasting from any random internet connection, Facebook is both the publisher and the carrier. Hence it is responsible for the content.
If it cannot make a valid business model out of properly curated content, then too bad, it does not deserve to exist. There are many business models that society does not allow to exist, and publishing/promoting propaganda, agitprop and dubious content is one of the business models that needs to be controlled by society.
Twitter probably falls into the same category with the way the platform promotes extreme political rhetoric. There is reasonable evidence that multiple elections around the world have been adversely affected by the various social media platforms over the past several years, so before outside influences create more havoc, societies around the world need to come up with a strategy to deal with social media companies that profit from spreading information designed to decrease the quality of life for everyone.
Dave Snowden is writing up the Definitive History of the Cynefin Framework, so I thought it was time to mention it here. Dave was one of the creators of DSDM, which I covered in my Questioning XP book, so it is nice that he has now come up with a way to talk about methodologies.
Obvious - this is the domain of Best Practices, where everyone knows how to operate, so it is a process of just doing what everyone knows how to do.
Complicated - this is the domain of learned expertise; there are multiple good answers, but careful analysis might be needed to discover the way forward. One metaphor refers to this as the domain of the bicycle: if it is not working right, you can take it apart, discover what is broken and then reassemble it.
Complex - this is the domain where good answers are only discovered in retrospect. A metaphor for this is that of the frog - you cannot take it apart, discover what is wrong and then reassemble it, you have to try different treatments on the whole organism. Dave Snowden talks about Safe to Fail experiments in this domain.
Chaotic - this is the domain of no clear cause and effect, so you just need to take action to try to move out of the chaotic state into one of the other domains.
Disorder - this is the domain of not knowing which domain you are in.
There are some things in software development that fall into the Obvious domain, but mostly there is an existing product or library that handles this domain for you. So if your application needs to store some data, then depending on what the data is, the choice of the filesystem, transient cache, a database or offsite cloud storage will be obvious. There may be some debate as to the flavour and/or vendor of the storage mechanism, but storing data is a known problem with well known solutions.
In part some of my Software Craftsmanship book was raising issues about using techniques that are relevant to the Obvious domain in Software Development. A Factory with a mechanical metaphor is appropriate for the Obvious domain, after all we know how to assemble a car. But the reason we know how to assemble a car is because experts working in the Complicated domain did a lot of Design for Manufacture work on the design of the car so that it could be economically made in a factory. Designing an assembly line is a very complicated process, but once it is built, it is Obvious what you are supposed to do at each work station along the line.
In software development, all of the Obvious domains are well served, so what is left is the Complicated and Complex domains, where off the shelf solutions are not available. Looking back at the Cynefin domains, some Methodologies are better suited to domains that are not very Complicated, bordering on the Obvious, while others, like Jim Highsmith’s Adaptive Software Development, are targeted at working in the Complex domain, with the three project phases of speculation, collaboration and learning.
I love it when software developers say “How hard can it be?!” and decide to build their own complete replacement system. The results are usually about as bad as the first system, for the same reason. To be fair, this stuff is really hard to write – which is all the more reason to be skeptical when someone says they’ll just put together a modular cloud-based version of their own. You should always ask “why do you believe you will get right the things that everyone else got wrong? Because the reasons that they got it wrong apply to you, as well.”
Learning the world, an introduction to SQL for Business Analysts. It uses PostgreSQL, but most of the SQL in the book is standard and could be used on any other database. It might need an appendix or web reference for database-specific queries that look at the table catalogs.
When asked how particle physicists address group-think, Gianotti explains instead why some research avenues require large communities.
You would think that sufficiently much has been written about cognitive biases and logical fallacies that even particle physicists took note, but at least the ones I deal with have no clue. If I ask them what measures they take to avoid cognitive biases when evaluating the promise of a research direction, they will either mention techniques to prevent biased data-analysis (different thing entirely), or they will deny that they even have biases (thereby documenting the very problem whose existence they deny).
Sabine Hossenfelder’s book Lost in Math has a lot more about the background to this.
The obvious fun question that arises from this is where are we doing this in software development?
From the “Software Art Thou” series on YouTube, this talk covered the idea of ensuring that your entire team has the same understanding of the problem domain.
The talk also references a tool that enables the construction of Concept Maps.
I have not got any examples I can share, but the thought occurs that I have seen quite a few projects get stressed or fail due to delays in starting work on the project.
Something to ponder when looking at specific delivery dates and finding that the start of the project is delayed.
One problem with using biometrics as an authentication mechanism is that mere presence is not authentication. Aside from some of the more gruesome science fiction scenarios – does the finger with the fingerprint need to be attached to the rest of the body? – there is also the case that just because the finger touched the sensor, it does not mean that the person intended to unlock anything.
Another problem is environmental: when it is -40 or below, who wants to touch anything? Another case is sterile environments – you do not want to touch anything with bare skin after scrubbing up. A related problem exists in industrial environments where hands might be exposed to paint, ink, oil or any of a wide variety of other substances that make reading a fingerprint unreliable.
Denial of service is also a problem in cases where the relevant print is damaged or hidden due to injury.
Overall, biometrics might be a possible solution for some extreme situations, but for the run of the mill unlocking access to most real life transactions, they do not provide the necessary intentional action or ease of use.
Although Identity Theft has entered the lexicon, it is just sloppy journalism. Nobody is stealing the identity of another person; what they are doing is stealing identifying information about other people. This then becomes a problem because all too many companies, organizations and systems use identifying information as an authentication token.
Ever seen a library system that uses the last four digits of your phone number as your password?
Have banks finally stopped asking for Mother’s Maiden Name?
The problem is that Weak Authentication has become the default for too many companies, organizations and systems, and our legal systems have not put the onus of fixing this in the right place.
Why is it suddenly the victim’s problem when a bad actor takes out a loan in the victim’s name?
It made me wonder if we do similar things in software development. Are we getting better at doing the wrong things? Something like the XML-RPC specification that was improved to become the Simple Object Access Protocol (SOAP) specifications under the auspices of the World Wide Web Consortium (W3C). This led to the need for tools to write and validate XML Schemas, leading to 1000+ line WSDL files that describe the SOAP end points.
This blog started back in 2006 running under Typo, it had a long run but in 2017 after upgrading the version of ruby it stopped working properly.
Finally got around to fixing it, by upgrading to Publify, the successor to Typo. It was remarkably easy just to set it up and then migrate the data over to the new database schema.
One thing I have noticed now that it is running under Rails 5.2.x is that it is much slower to restart and to serve new content than the original version that ran under Rails 2.3.x. Yes, Publify has a lot more features, but since I do not support comments/trackback/ping/twitter etc. on this blog, most of the extra stuff is not used, so what I really notice is that it is much, much slower. It could also be that I have been working with Elixir/Phoenix recently and have got used to the speed of that for development and page rendering, so moving back to Rails just feels slow now.
We are social animals, and we are wired to want to connect, want approval, want to share, and want to organize on the platform where everyone else is, and this, for now, is in Facebook’s advantage. Additionally, it’s hard to say that Facebook is all bad: it does connect people, it has helped organize meetups and events, and it does make the world more interconnected.
But, as Facebook’s users, we and our data are its product. And, as we understand more about how this data is being used, we can still play on Facebook’s playground, by its rules, but be a little smarter about it.
One amusing part of this article is that it is hosted on GitHub, another social sharing platform. It is as if even tech people find it too much trouble to host their own data.
Primary keys are sorted to the top of the table symbols
Lines are thicker on hover to make it easier to select the relevant symbol
Query does not filter out empty tables.
This completes the set of databases that I have made this work for; I might include DB2 at some point in the future if I ever work on an IBM system.
For this interactive version, hovering over the lines makes them larger so that you can click to highlight the line. This makes it easy to plan out a query by following the links between the relevant tables, regardless of where they are on the screen. A good example of this would be tracing out which language of DVDs are rented out in a specified city. Determining this needs seven tables and six relationships, and it is much easier to have the path highlighted while writing the query than having to remember the path as you go.