Friday, December 15, 2017
On Net Neutrality
Wednesday, August 23, 2017
On Healthcare
The president was astounded to discover that healthcare is complicated: "Who knew healthcare was this complicated?" Healthcare isn't inherently complicated; we make it so, but more on that later.
Let's also see if we can get some silly notions out of the way first. Mark Meadows, the Chairman of the so-called "House Freedom Caucus", recently remarked about single payer, "How could we ever afford that?" That has to be one of the dumber statements he has ever made, but again, more on that later. Incidentally, I say the "so-called" Freedom Caucus because it is a Republican-named concept, and Republicans always name things as the opposite of their intent. So their Freedom Caucus is more correctly described as a Tyranny Caucus, as in: I give you the freedom to do it exactly the way I tell you.
We hear a lot about health insurance, but let's examine briefly what insurance is. Insurance protects us from drastic losses in the face of unfortunate but rare occurrences. For example, if everyone totaled exactly one car every year, there would be no point in having insurance, since the premium would be about 120% of the price of a new car and we could buy a new car for 100% of the price. The extra 20% would be the profit for the insurance company. Fortunately not everyone totals a car every year; in fact, it is rare for a person to do so. This means that insurance is worth it, because, perhaps, you can pay a known amount of, say, $1,000 per year and in exchange, should you total a $40,000 car, you get reimbursed. This works because a large number of people pay for insurance and yet relatively few benefit from it. This, in essence, is socialism. Everyone contributes a reasonable amount, but only the needy receive. Or, as Marx put it, "From each according to his abilities, to each according to his needs." So the essence of insurance is socialism, even when insurance is sold for a profit. In fact, there are several insurance companies that are owned by their customers, so these companies don't have shareholders who take profits. They tend to have the word "mutual" in their names, since they are, in effect, a large club of people who band together in a socialist enterprise where everyone pays but only those who suffer losses receive. Other companies are for-profit; the shareholders put up the equity and take a share of the premiums as profit. But for the customers, it is still essentially a socialist enterprise.
Now it's also worth noting that in many cases for many reasons, the government mandates that you have insurance. You have to have insurance if you drive a car. Businesses have to have insurance in case they hurt the public, etc. Generally no one complains about these mandates.
So back to health. Let me first state some obvious facts.
- Healthy people need less healthcare.
- Healthy people can work.
- People who work pay taxes.
- Having an available pool of healthy workers is good for business.
- People who are denied access to healthcare often develop chronic conditions that could have been easily prevented.
- People who don't work can be a drain on society.
Saturday, August 19, 2017
On the Firing of Steve Bannon
Bannon's gone
Let me try and explain this. Donald Trump grew up in a racist home with a racist father and went to work in a racist real estate firm. Now Trump doesn't have the intellect to think deeply, and as such he is largely devoid of ideology. He is driven by impulse, and his impulses are to be rude, to be nasty, to stiff people and to con people.
So he's blundering and blustering his way through the primaries and managing to stay alive for two reasons: first, because the primary system is inherently undemocratic, and second, because the set of Republican primary voters has only a very small intersection with the set of intelligent, reasonable people.
But then we also have Steve Bannon. Steve's overall goal is to destroy totally the United States as we know it. He runs a website dedicated to this, Breitbart. He then realizes that he can achieve his goals with the help of all the nastiest people in the US which he calls the "alt-right" and promises them a home at Breitbart. It's not clear whether Bannon himself is racist or anti-Semitic but it doesn't really matter if he can use them.
He observes this crazy Donald Trump and identifies him as a man with no principles, no moral compass, and sees that he can appeal to Donald's nasty instincts to both help Trump and at the same time, use Trump to destroy the US, which, remember, is Bannon's ultimate goal.
It's important to remember that Trump doesn't take advice and dislikes being told what to do. In order to have bankrupted casinos in New Jersey he must have had to ignore the advice of many people trying to help him. But Trump is so arrogant that he would rather go broke on his own than listen to others and make a billion dollars.
So then we have Charlottesville. Trump makes a weak, anemic statement on Saturday, but then is pressured to read a prepared statement from a teleprompter on Monday. It was obvious that he didn't mean a word of it. But since he had listened to advice, he was furious with himself for being so weak as to take it. So on Tuesday he melted down.
After that, it became obvious that Trump himself holds all the nasty views and supports neo-Nazis and the KKK. But another of Trump's traits is that he can never admit a mistake. So now he fires Bannon in the hope everyone will be fooled into thinking he has cleaned house because the problem was Bannon.
Let's be clear, he hasn't. The problem is Trump himself. And he still has Stephen Miller and Sebastian Gorka who have to be two of the most repugnant men on the planet.
So there you have it. I doubt the media will get it at all; they will simply be relieved that Bannon has gone and conclude that Trump has turned a corner.
Sunday, February 22, 2015
More thoughts on TPF
I thought I would take the opportunity to shatter a few other myths about TPF and Assembler. Let me start with a disclaimer. My last experience with TPF was in 1992 while working on the Confirm project, before it was run into the ground by Max Hopper and his side-kick Dave Harms. That was my escape from TPF so if things have changed since then, so be it.
TPF code has to be re-entrant
It's taken as gospel that code written for TPF has to be re-entrant, and as part of this myth it is assumed that writing such code is difficult. Since TPF never interrupts a running ECB and switches to another ECB, code does not have to be re-entrant. It does, however, have to be serially reusable between implied-wait SVC calls. This means that you can, if you should so desire, store variables in the program space as long as you don't expect them to survive an SVC call. It also means that it is possible to write self-modifying code, as long as the code is corrected before an SVC. This is, of course, appalling practice, but I have seen examples of it. One common example is that because the usage count of a program is maintained in the program header, it is sometimes incremented by the program itself to ensure that it does not get unloaded. This practice does, however, have several drawbacks. Because programs are modified, any attempt to load programs into storage that ECBs can only read would cause those programs to fail. In an ideal operating system, all programs would be loaded into read-only storage so they cannot be modified, either deliberately or inadvertently.
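To make the distinction concrete, here is a C-flavoured sketch. The function and variable names are mine, purely for illustration; the comments state the TPF assumption that ECBs are only switched at implied waits.

    /* Re-entrant: all state lives on the stack (or in the ECB's own work
       area), so any number of ECBs could run this code at the same time. */
    int count_nonzero_reentrant(const int *items, int n)
    {
        int count = 0;                /* automatic: private to this activation */
        for (int i = 0; i < n; i++)
            if (items[i] != 0)
                count++;
        return count;
    }

    /* Serially reusable: a program-space (static) scratch variable is safe
       only because the dispatcher switches ECBs solely at implied waits;
       its value must not be relied upon across such a wait. */
    static int scratch;

    int count_nonzero_serially_reusable(const int *items, int n)
    {
        scratch = 0;
        for (int i = 0; i < n; i++)
            if (items[i] != 0)
                scratch++;
        return scratch;               /* valid only until the next implied wait */
    }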
Since it is the responsibility of the operating system to decide when to load and unload programs, the information it uses to make that decision, i.e., the usage count, should be maintained in protected storage available to the OS only. While working at Danbury on the implementation of 4K blocks and Centralized List Handling (CLH) this was a change I tried to make, but it was determined that we would break too many badly behaved programs, so it was dropped.
Structured Code is always less efficient
I regard this as a classic case of muddled thinking. Whenever I have found it hard to write structured code I have always discovered, after much thought, that it's because I was trying to solve the problem incorrectly. If you adequately define the problem, the solution can always be structured nicely, which makes it easier to follow. It is true that using the Structured Programming Macros (SPM) did, on occasion, lead to branches to branches to branches as the indentation unwound. However, if you structure the code correctly, you can be sure that this happens in the least used path. I remember that we rewrote the CPU loop in SPM as a part of CLH. The TPF bigots were appalled that the code eventually led to 4 branches being executed in a row. This was held up as a classic case of how inefficient SPM was. What the simple-minded failed to realize was that while this was indeed true, it occurred in a path that ended after the final branch with an LPSW that loaded a wait state. Thus it is true that 3 unnecessary branches were taken en route to doing..... nothing!
But this only leads to a further conclusion: it is often the case that code written in a high level language can be more efficient than code written in assembler. Someone on a thread in TPF'ers said that some of the best assembler code he had ever seen was generated by C/C++ compilers. This is to be expected, even if it is counterintuitive. A compiler, looking at the big picture, wouldn't generate branches to branches - it would generate code to go directly to the end. But this would be done without damaging the structure of the code.
Let me give another example of where a compiler can generate better code than a good assembler programmer.
* It's been so long, forgive me if I make some syntactical errors
* Some return codes - defined in some macro somewhere
OK       EQU   0
BAD      EQU   OK+1
VBAD     EQU   BAD+1
         . . .
* Some code to process the return code that is returned in register 0
         CH    0,=AL2(OK)
         BNZ   ERROR
         ...
ERROR    Code continues.
This code is trivial, but it exemplifies some good practices. The values OK, BAD and VBAD are not hard coded but EQU is used. This means that if you change the values, the code will continue to work as intended, always assuming that the code setting the return code is using the same set of equates. But, I hear you say, it would be more efficient to use LTR 0,0 to compare for zero. Indeed it would, but if you do that, then the code breaks if the value for OK is changed. But now let's look at what happens in a high level language.
enum { OK, BAD, VBAD };
. . .
if (return_code == OK) . . .
In this case, the compiler, knowing at compile time that OK is equal to 0, can generate an LTR itself; but if you change the enum, the code generated next time will be different, and it doesn't break. This is a trivial example, but extend it to a situation where the compiler needs to multiply a number by a power of 2: since it knows the value at compile time, it can generate an SLL (shift) rather than an MH (multiply). You can't do that safely in assembler, because if the number you wish to multiply by changes from 2 to 3 due to an unrelated change in a DSECT somewhere, the assembler code would break.
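The same idea in C, as a minimal sketch (the constant SCALE and the function name are mine, purely for illustration):

    /* If SCALE is a power of two the compiler will typically emit a shift;
       change it to 3 and it emits a multiply - the source code itself never
       needs to change and never breaks. */
    #define SCALE 2

    int scale(int n)
    {
        return n * SCALE;
    }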
There are many more examples where a compiler can generate more efficient code. Consider a case where you test a condition and do one thing if it's true and another if it's false. Over time, it's conceivable that modifications are made to both paths independently, and that eventually both paths contain identical pieces of code. Because the compiler is looking at the bigger picture, it can spot this duplication and emit the common code just once. Remember that code which looks totally different to you might look identical to a compiler. It's even possible that you might notice the similarity in the assembler code and extract it yourself, but if you did that you would be falling into the trap of "coincidental binding", and the resulting code would be extremely difficult to follow.
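A contrived C sketch of that situation (all of the names here are mine, for illustration only):

    #include <stdio.h>

    static void log_value(int v)       { printf("value=%d\n", v); }
    static void process_success(int v) { printf("ok: %d\n", v); }
    static void process_failure(int v) { printf("bad: %d\n", v); }

    void handle(int ok, int value)
    {
        if (ok) {
            log_value(value);          /* added by one modification */
            process_success(value);
        } else {
            log_value(value);          /* added later, independently */
            process_failure(value);
        }
    }

Because both paths begin with the same log_value(value) call, an optimizing compiler is free to emit it once, ahead of the branch; hand-extracting it in the assembler would tie the two paths together and obscure the structure of the source.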
It is for reasons like this that compilers often have a switch to inhibit optimization for debugging purposes: the code generated by an optimizing compiler can be extremely difficult to follow. But the optimization is very much worth having in production.
4K blocks
When we implemented 4K blocks, I fought very hard to make the block 4096 bytes long instead of 4095 bytes. My thinking was that at some time in the future, someone would realize that it was ridiculous to split a program up into components that would each fit within a 4K block, and that instead it would be possible to write a program many kilobytes or even megabytes long, simply load it into multiple 4K blocks, and use virtual addressing to make them all appear contiguous. I envisioned systems that, instead of constantly fetching programs from DASD, would load them completely into RAM at boot time. Even though this wasn't possible at that time, I thought limiting the block to 4095 instead of 4096 made absolutely no sense, but such was the shortsightedness of the TPF development team at Danbury, and of Bob Dryfus in particular, that the blocks became 4095. It's always puzzled me why TPFers in general have this tremendous instinct never to look forward and always hanker after the past.
I don't blame the people who designed PARS back in the 1960s. The world was very different then and the art of software was very undeveloped. And computers were so lame back then and memory so small that every byte used or instruction executed mattered. But today things are different. In the next room I have a PC server with 24 GB RAM and dual quad core processors. We buy disks with capacities measured in terabytes that fit in the palm of your hand. My guess is that the entire program base of a TPF system would fit easily on to a single SSD hard drive. It simply doesn't make sense to continue to use 1960s or even 1980s software technology in 2015.
Saturday, February 21, 2015
A rant about TPF
In a post on Facebook in the TPF'ers group, Jerri Peterson said "Assembler was always my true love. How do programmers truly know what they are doing if they don't understand assembler!????" It is certainly true that to understand fully how computers work, a knowledge of an assembler-like language is essential. However, a knowledge of assembler is far from sufficient to make a good programmer. The experience of TPF shows that there are large numbers of people familiar with assembler who have very few skills when it comes to programming. One of the problems with the TPF mentality is that an obsession with writing clever assembler was considered vastly preferable to a thorough understanding of algorithms. Everyone always raves about how efficient TPF is and quotes large numbers of transactions per second as proof. Of course, what few understand is that the TPF idea of a transaction would be considered laughable by most in the IT business. MD, MU, 1* etc. are all considered by TPF to be transactions. When building a PNR, each entry, i.e., the name, address, telephone number etc., is considered a transaction. In the real world, only entries like ET are counted as real transactions, because that is when some permanent mark is left in the database. But it doesn't matter, because it is essential to indoctrinate the new TPFer into the infallibility of TPF. The reality is somewhat different. Some of the worst examples of programming I have ever seen have been written by people in TPF assembler.
Let me give a couple of examples. Years ago, while working on Amadeus in Miami, they were having trouble running schedule change. This was in testing and it all worked, but when tried with the typical volumes that would be required in production, each nightly run was taking 40 hours. This is clearly unacceptable. The System One assembler gurus pored over the code and were just unable to tweak it. It was all written in assembler and it couldn't be improved. IBM brought in someone from DC who studied the program and went away to rewrite it in PL/1. The TPFers scoffed at the very idea. But he came back with a program that worked and ran in, as I recall, about 6 hours. So how did a program written in PL/1 manage to defeat an extremely efficient assembler program? The answer is simple. It used intelligent algorithms. The assembler program maintained the schedules as one huge array in RAM (called core in those days) and, since this was for many carriers, the size of the array was measured in megabytes. This was back in the day when 64 MB of RAM was an amazing number. Every time an entry was inserted into the schedule, all the schedules below the insertion point had to be moved down, and when a schedule was deleted, all those following had to be moved up to fill the gap. Even though it was written efficiently, the cycles required to move vast amounts of data slowed the program to a crawl. The PL/1 program instead kept a slice of storage for every schedule, but rather than moving data around it maintained a chained list: each item held a pointer to the next item and a pointer to the previous item. To remove an item from the list, it simply ran down the chain until it found the item that needed deleting and made the previous one point to the next one, moving no data at all. Similarly, to add an item, it was placed at the end of storage and the appropriate pointers were updated. This wasn't particularly clever, but it demonstrated that really thinking about what is going on is much more important than minimizing path length.
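A minimal C sketch of that technique - a doubly linked list in which insertion and deletion just repoint neighbours. The structure and function names are mine; the original program was PL/1 and I have no record of it.

    #include <stddef.h>

    /* One schedule entry in a doubly linked list; the fields are illustrative. */
    struct schedule {
        struct schedule *prev;
        struct schedule *next;
        /* ... schedule data ... */
    };

    /* Unlink an entry: just repoint its neighbours - no data is moved. */
    void schedule_remove(struct schedule **head, struct schedule *item)
    {
        if (item->prev)
            item->prev->next = item->next;
        else
            *head = item->next;          /* item was the first entry */
        if (item->next)
            item->next->prev = item->prev;
    }

    /* Insert new_item immediately after pos (or at the head if pos is NULL). */
    void schedule_insert_after(struct schedule **head, struct schedule *pos,
                               struct schedule *new_item)
    {
        if (pos == NULL) {               /* insert at the front */
            new_item->prev = NULL;
            new_item->next = *head;
            if (*head)
                (*head)->prev = new_item;
            *head = new_item;
        } else {
            new_item->prev = pos;
            new_item->next = pos->next;
            if (pos->next)
                pos->next->prev = new_item;
            pos->next = new_item;
        }
    }

Neither operation moves any schedule data, so the cost of an insert or delete stays constant no matter how many schedules there are, instead of growing with the size of the array.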
Before I even came to the US, back in the mid 70s, I was working for a company which had won a contract to build, for a large national company, a network that would allow all of their terminals to access any of their mainframes. We developed an X.25-based protocol. Each computer on the network either interfaced with terminals, interfaced with mainframes, or was just a node for routing packets. The contract called for the nodes to be Ferranti Argos minicomputers which booted from cassette tapes and had no other peripherals, just RAM. The project called for these nodes to be capable of switching packets at, as I recall, 240 packets per second.
The project was staffed by some TPF people drawn from British Airways and by non-TPF people. I was a rarity in that I had had a lot of non-TPF experience but, following the merger of BEA and BOAC, had moved into TPF; thus I had experience from both sides. It was a condition of the contract that all development be done in a high level language and that it follow good structured programming guidelines. As you can imagine, it was a terrific battle to get the TPFers to embrace the notion that the most important thing was to write good, structured code and that performance would be looked at later, after it was working.
After the project was complete and working, we set about testing throughput. We did this by arranging 4 nodes in a square and injecting a message into a node destined for a remote node. The node would do its routing and decide where to send the packet, but the code was patched to send it to the next node in the square. This way we did all the routing work but the packet would end up flying round and round in circles. Periodically we would inject another packet until the node couldn't handle the load, and since we were counting the number of packets switched per second we were able to ascertain that number after the test. From memory, our first attempt yielded somewhere around 80 packets per second. There was much "I told you so" from the TPFers. But what we did was add a trap to the timer interrupt to record the equivalent of the PSW at every tick; examining this later allowed us to draw a bar graph of where most of the CPU time was going. This was a message-passing OS, so to communicate between separate processes a message was copied from the memory of the sending process to the memory of the receiving process. That routine was written, like the rest of the OS, in a high level language. We changed this one routine to assembler, changing it from a for loop to the equivalent of a BCT loop, and reran the test. The next run showed around 160 packets per second. Repeating the process to eliminate the next two or three significant bottlenecks meant that we easily and quickly met our targets, and without destroying the integrity of the code.
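Here is a C sketch of the sampling idea behind that bar graph. The names and the bucket scheme are mine, purely to illustrate the technique; they are not the actual Argos code.

    #include <stdint.h>

    #define NBUCKETS 256

    static uint32_t samples[NBUCKETS];     /* hit count per slice of the code */
    static uint32_t code_base = 0;         /* start of the code region, set at start-up */
    static uint32_t bucket_size = 64;      /* bytes of code per bucket, set at start-up */

    /* Called from the timer interrupt with the address that was executing
       when the tick occurred. Over thousands of ticks the histogram shows
       where the CPU time is really going. */
    void profile_tick(uint32_t interrupted_pc)
    {
        uint32_t bucket = (interrupted_pc - code_base) / bucket_size;
        if (bucket < NBUCKETS)
            samples[bucket]++;
    }

The cost per tick is a handful of instructions, so the measurement barely disturbs the system being measured, yet dumping the samples array after a run points straight at the hot spots.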
It may have been necessary back in 1960, but TPF has been using obsolete techniques for decades and has steadfastly fought any efforts at rational design.
There is an insistence within the TPF community that everything must be done incorrectly. I remember having huge arguments with the people at Danbury over the ATTAC and DETAC macros. They decided, in their infinite wisdom, that if you attempted to DETAC a level that was not currently in use, it would issue a dump. This caused unnecessary code in the OS, which had to check for the condition instead of just making a copy, and it also caused unnecessary programming in programs that were used as utilities. A program that could be called by any other program to perform a task couldn't just DETAC, say, level 15 and then ATTAC it at the end. Instead it had to check whether there was anything on level 15: if there was it could DETAC, but if not it needn't bother. But now it has to remember whether it did a DETAC or not, so it has to set a bit in the new core block it acquired for its work so that it knows whether to do an ATTAC at the end. Pushing and popping stacks are fundamentals of Programming 101, and yet they were not understood even in the 90s.
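A C sketch of the save/restore semantics the utility program really wanted, where detaching an empty level is simply not an error. The names are mine and nothing here is the real TPF interface.

    #include <stddef.h>

    #define LEVELS 16

    static void *level[LEVELS];        /* one slot per data level, D0-DF style */

    /* Detach: hand back whatever is attached (possibly NULL) and clear the
       slot. An empty level just yields NULL - no dump, no special casing. */
    void *level_detach(int n)
    {
        void *block = level[n];
        level[n] = NULL;
        return block;
    }

    /* Attach: restore a previously detached block (or NULL). */
    void level_attach(int n, void *block)
    {
        level[n] = block;
    }

With these semantics a utility simply does "void *saved = level_detach(15);" on entry and "level_attach(15, saved);" on exit, with no flag bit to carry around and no special case for an empty level.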
There are other examples of this obsession with efficiency causing problems. I remember an old man in Danbury, we'll call him Frank, who decided that an STM or LM was OK if you were handling more than 4 registers, but that for 4 or fewer, multiple Ls or STs were quicker, so he went through the CP and changed them all. What he overlooked, and we found out the hard way, was that:
    LM    1,4,0(1)
is NOT the same as
    L     1,0(1)
    L     2,4(1)
    L     3,8(1)
    L     4,12(1)
because the LM loads all four registers using the original contents of register 1 as the base, whereas the separate loads overwrite register 1 first, so the remaining three loads use the newly loaded value as their base address.
It's encouraging to see TPF embracing C, but let's remember that C is now hopelessly obsolete; object-oriented programming is now universally understood. Anyway, that's the end of my rant. Your mileage may vary, but I really wish people would learn that the ability to develop a new system quickly and reliably is much more important than clever assembler programming.
Friday, June 7, 2013
Spying on US phone records, etc.
Thursday, June 6, 2013
What is freedom?
I think we can fairly say that, in comparison to the old USSR or to North Korea, we are a free society. I can't imagine anyone arguing with that. But what if we compare ourselves to Europe or Australia or Canada - are we freer than they are or not? I assert that depends very much on your definition of freedom. On the face of it, we would appear to be less free. After all, the incarceration rate in the "land of the free" is higher than that of any peer with which we would wish to be compared. At 5% of the world's population we have 25% of the prisoners. So clearly, larger numbers of Americans are less free than their counterparts in Europe, etc. And by less free in this context, I am not comparing the quality of freedom, but pointing out that Americans are actually incarcerated, which is unarguably less free than not being in prison.
But are people in the US otherwise free from government intervention and control? Again, no. The US government actually prohibits its citizens, for example, from visiting Cuba. Most governments might warn their citizens against travel to certain countries, advising them that consular aid may not be available or that the local government is known to be hostile. But a government of free citizens can hardly prohibit travel, and yet the US does. So it's far from obvious that people in the US are freer or less free than their counterparts.
As I see it there are two distinct forms of freedom - the freedom to and the freedom from. So deciding relative freedoms involves balancing the weight of freedom to against freedom from. And this is where the US has distinctly more freedom to, but the Europeans, Canadians and Australians have more freedom from. So, for example, in the "less free" nations you are unlikely to be sitting in your house and be randomly murdered in a drive-by shooting, in some random gang violence, or even in a home invasion. But this is achieved by restricting the freedom to buy as many guns of any type as you like. In reality, I think most of us, within reason, prefer freedom from things. We like that people aren't allowed to be a nuisance, or that people aren't allowed to build really ugly buildings within inches of our own homes. Some people will see it as an infringement of their liberties that they aren't allowed to have a raucous party in the middle of the street at 2 am.
So what about taking DNA samples? I can't see that this is fundamentally any different from taking fingerprints or even taking a mug shot. That mug shot or fingerprint stays on file forever and is used to see if prints found at a crime scene match someone previously arrested. Now, it's true that this does seriously impact your freedom to murder people, since it does mean that you are more likely now to be apprehended. So this might act as a deterrent. On the other hand, insofar as it does act as a deterrent, this could significantly increase your freedom from, freedom from rape and murder. So, on balance, I support this decision. The fact that Antonin Scalia dissents may be the most compelling reason of all to support it.