Friday, December 15, 2017

On Net Neutrality

Given the vote by the FCC on net neutrality, the obvious next move is to sell off all roads to private corporations. These corporations would turn them all into toll roads and would be able to make their own rules. So, for example, the "American Road Corporation" could make a rule limiting all imported cars on its roads to the right lane only and a maximum speed of 35 mph.
Now, you might conclude this is absurd, and you would be right. But it is essentially what the FCC did today.
The correct solution is for the network itself to be controlled by the various levels of government, federal, state and municipal, each of which would charge everyone for access to the network. This is, coincidentally, how roads work. Some are federal, we tend to call them Interstates; some are state roads, some are controlled by counties and others by the local municipality. Of course, the physical task of building and maintaining these roads is contracted out to private companies.
Content providers could supply content to the network, to which all subscribers would have access. Some content providers might charge for access to their content, like Netflix. Others, like Facebook, might show ads as part of the content, and ordinary companies would allow free access to their websites as a way to reduce their costs of processing billing, etc.
The fundamental problem, from my experience in the US, is that very few people are able to see the basic principles involved. Because cable companies decided to piggyback internet traffic on their coax cables, people just assumed, without thought, that it made sense for a cable company to provide internet access.
If it were suggested that every car manufacturer build their own unique road system, that would immediately be seen as absurd because no one wants 27 different roads running past their front door.
But when it comes to digital transportation, people are willing to accept that solution because no one bothers to analyze the basic problem.
The cable companies also allow telephone traffic over those same cables, but those phone services are controlled by regulations relating to phone companies. So what is the difference between the phone traffic being regulated and the internet traffic being regulated? The answer is none, but because people lack the intellect to break the problem down into its constituent parts, they fail to see the problem.
The obvious solution is for government agencies to control access to the network.
If you analyze what a cable company does, the answer is nothing at all. I discuss this in more detail below. It takes content created by others, aggregates it and supplies it to consumers. Those consumers have no choice in which cable company they use because that decision is usually made by each municipality. So why should a cable company get paid for providing a consumer access to, say, HBO? Shouldn't that transaction be solely between HBO and the consumer?
Obviously, if each consumer had to have a separate relationship with every content provider, both the consumer and the content provider would be swamped with paperwork, even if most of it was digital. Therefore, Aggregators would arise who would bundle content into packages and act as intermediaries. They would replace what is done today by the Cable companies. Of course, logic dictates that Cable companies would morph into Aggregators. When I was a boy, it was common to buy meat from a butcher, milk from a dairy, groceries from a grocery and medicines from a pharmacist. But today, we have Aggregators like Kroger, Publix, Stop & Shop, etc. where you can buy everything in one place, so the notion of Aggregators is not foreign to us.

If government purchased all the physical infrastructure, content providers would then have direct access to all consumers. Anyone who supports free markets can't possibly be opposed to that. The only conceivable objection is that you want to stifle competition by putting local monopolies in charge of consumers' access to content.

Returning to the role of Cable companies, it's obvious that at one time they did perform useful work. What they did was to put cable into the ground or, more likely, string it on poles so as to make the environment look bad and leave the cable vulnerable to storms. They then pushed content from content providers over those cables to consumers. The advantage to consumers was twofold. They had access to far more content than was available on over-the-air (OTA) broadcasts. The other advantage was, at least in theory, more reliable service. This did require a HUGE investment by these companies, and they need to be compensated for this investment.

But it's worth examining why this happened. Clearly, in any society governed by the people and for the people, government would have seen this cable as essential infrastructure and would have worked with private contractors to ensure common standards and universal availability. But we don't live in such a society. Our government agencies have no foresight and so, even now, decades later, there are still many places where cable is not available, since the cable companies, being businesses, face diminishing returns as they service more remote and sparsely populated areas. It's worth noting that the same happened with electrification in the US and eventually the government had to step in and say that if you were generating and selling power, you had to make it available to everyone.

Wednesday, August 23, 2017

On Healthcare

Now that the healthcare "debate" has subsided, I thought I would state some of my thoughts on the subject.

The president was astounded to discover, as he put it, "Who knew healthcare was this complicated?" Healthcare isn't inherently complicated; we make it so, but more on that later.

Let's also see if we can get some silly notions out of the way first. Mark Meadows, the Chairman of the so-called "House Freedom Caucus", recently remarked about single payer, "How could we ever afford that?" That has to be one of the dumber statements he has ever made, but again, more on that later. Incidentally, I say the "so-called" Freedom Caucus because it is a Republican-named concept, and Republicans always name things as the opposite of their intent. So their Freedom Caucus is more correctly described as a Tyranny Caucus, as in "I give you the freedom to do it exactly the way I tell you."

We hear a lot about health insurance, but let's examine briefly what insurance is. Insurance protects us from drastic losses in the face of unfortunate but rare occurrences. For example, if everyone totaled exactly one car every year, there would be no point in having insurance, since it would cost about 120% of the price of a new car and we could buy a new car for 100% of the price. The extra 20% would be the profit for the insurance company. Fortunately, not everyone totals a car every year; in fact, it is rare for a person to do so. This means that insurance is worth it, because, perhaps, you can pay a known amount of, say, $1,000 per year and in exchange, should you total a $40,000 car, you get reimbursed. This works because a large number of people pay for insurance and yet relatively few benefit from it.

This, in essence, is socialism. Everyone contributes a reasonable amount, but only the needy receive. Or, as Marx put it, "From each according to his abilities, to each according to his needs". So the essence of insurance is socialism, even when insurance is sold for a profit. In fact, there are several insurance companies that are owned by their customers, so these companies don't have shareholders who take profits. They tend to have the word "mutual" in their name since they are, in effect, a large club of persons who band together in a socialist enterprise where everyone pays but only those who suffer losses receive. Other companies are for profit; the shareholders put up the equity and take a share of the premiums as profit. But for the customers, it is still essentially a socialist enterprise.

Now it's also worth noting that in many cases for many reasons, the government mandates that you have insurance. You have to have insurance if you drive a car. Businesses have to have insurance in case they hurt the public, etc. Generally no one complains about these mandates.

So back to health. Let me first state some obvious facts.
  • Healthy people need less healthcare.
  • Healthy people can work.
  • People who work pay taxes.
  • Having an available pool of healthy workers is good for business.
  • People who are denied access to healthcare often develop chronic conditions that could have been easily prevented.
  • People who don't work can be a drain on society.
It's not particularly hard to understand those simple truths.

Republicans hate to admit that people die from lack of access to healthcare. They assert that, in the end, everyone gets the care they need. Now this is obviously nonsense, but let's assume for the moment that it is true. Remember how I said earlier that Mark Meadows, commenting on single payer, had asked how it could possibly be paid for, and that I asserted this was one of his dumbest ever statements. If we believe the Republican line that in the end everyone gets the healthcare they need, then it follows that all the healthcare needed is currently being funded. So essentially he asked "How is it possible for us to pay for what we are already paying for?" and that is obviously an absurd question.

So let's examine what would happen if we had some kind of single payer system, run either as a government entity or as a form of non-profit corporation. First we have to examine how it would be paid for. Well, that's not really very hard, since we have already established that enough money is currently flowing into the healthcare system to pay for the current system. So all that has to be done is to redirect those current revenue streams towards the single payer. Corporations, instead of paying for and administering complicated schemes, would simply make a similar contribution to the single payer. Government, which already funds Medicare and Medicaid, would redirect its funds to the single payer system, and individuals would be required to contribute to the single payer in a manner that was means tested and subsidized by the government. Of course, it's also possible, though not essential, to completely decouple employment from the equation, which, incidentally, would leave businesses free to focus on their core competence instead of administering complex healthcare schemes. One could argue that any CEO who hasn't already pleaded with the government to remove from business the onerous task of administering a healthcare system is in fact doing his or her shareholders a disservice.

But let's look at what happens if we do this. Immediately, because the single payer is a non-profit organization, the profits currently made by health insurance companies are saved. This probably amounts to between 20% and 30% of the total costs. The main beneficiaries of ObamaCare have been the insurance companies; they make billions of dollars in profits. In a single payer system these profits are avoided. Further, because you don't have 15 or more companies all running many different policy schemes, each of which needs administering separately, all healthcare would be paid under one scheme, potentially saving further billions in administrative costs. Additionally, since there is now a single payer, that payer is free to negotiate with the pharmaceutical companies for the best prices, something that is currently prohibited by law in some circumstances at the insistence of the Republicans. It's interesting here to note that people who claim to believe in market forces seem to be opposed to them when the benefit of the market favors the people instead of corporations.

So we have so far shown that we could all get the same healthcare we get now and save money in the process by going to single payer. But that's only the start of the savings.

Because of the benefits, listed above, of people having access to healthcare, we would expect secondary and even tertiary effects to further reduce costs. Because everyone would have ready access to preventative healthcare, overall costs would fall as fewer people become seriously sick, because their ailments are discovered earlier. These are what I termed the secondary effects.

But because the population as a whole would be healthier, more people could and would work. This yields the combined benefits of increasing productivity, adding to GDP and increasing tax revenues while, at the same time, reducing welfare and social safety net payments.

Last time I checked, the Constitution starts with the words "We the People", so it's not unreasonable to infer that the country should be run by and for the benefit of the people, not corporations. While the United States has by far the best technology and innovation in its advanced research hospitals, it has to be acknowledged that the population of the United States effectively has access to one of the poorer healthcare systems in the developed world. This is just an objective fact. Do some research and see where the US ranks against other advanced nations in categories like life expectancy, infant mortality and other important measures. You will find that of the 34 members of the OECD, the US, far from ranking #1, is typically in the mid to lower 20s, and yet the amount spent on healthcare per capita in the US is by far the highest. It's about time the country that claims to be the best for business started to deliver the best healthcare for its citizens.

Saturday, August 19, 2017

On the Firing of Steve Bannon

Bannon's gone

Let me try and explain this.
Donald Trump grew up in a racist home with a racist father and went to work in a racist real estate firm. Now, Trump doesn't have the intellect to think deeply and, as such, he is largely devoid of ideology. He is driven by impulse, and his impulses are to be rude, to be nasty, to stiff people and to con people.
So he's blundering and blustering his way through the primaries and managing to stay alive for two reasons: first, the primary system is inherently undemocratic, and second, the Republican primary voter set has only a very small intersection with the set of intelligent, reasonable people.
But then we also have Steve Bannon. Steve's overall goal is to totally destroy the United States as we know it. He runs a website dedicated to this, Breitbart. He then realizes that he can achieve his goals with the help of all the nastiest people in the US, whom he calls the "alt-right", and he promises them a home at Breitbart. It's not clear whether Bannon himself is racist or anti-Semitic, but it doesn't really matter if he can use those who are.
He observes this crazy Donald Trump and identifies him as a man with no principles, no moral compass, and sees that he can appeal to Donald's nasty instincts to both help Trump and at the same time, use Trump to destroy the US, which, remember, is Bannon's ultimate goal.
It's important to remember that Trump doesn't take advice and dislikes being told what to do. In order to have bankrupted casinos in New Jersey, he must have had to ignore the advice of many people trying to help him. But Trump is so arrogant that he would rather go broke on his own than listen to others and make a billion dollars.
So then we have Charlottesville. Trump makes an anemic, weak statement on Saturday, but is then pressured to read a prepared statement from a teleprompter on Monday. It was obvious that he didn't mean a word of it. But since he had listened to advice, he was furious with himself for being so weak as to take it. So on Tuesday he melted down.
After that, it became obvious that Trump himself holds all the nasty views and supports neo-Nazis and the KKK. But another of Trump's traits is that he can never admit a mistake. So now he fires Bannon in the hope that everyone will be fooled into thinking he has cleaned house because the problem was Bannon.
Let's be clear, he hasn't. The problem is Trump himself. And he still has Stephen Miller and Sebastian Gorka who have to be two of the most repugnant men on the planet.
So there you have it. I doubt the media will get it at all; they will simply be relieved that Bannon has gone and conclude that Trump has turned a corner.

Sunday, February 22, 2015

More thoughts on TPF

I thought I would take the opportunity to shatter a few other myths about TPF and Assembler. Let me start with a disclaimer. My last experience with TPF was in 1992 while working on the Confirm project, before it was run into the ground by Max Hopper and his sidekick Dave Harms. That was my escape from TPF, so if things have changed since then, so be it.

TPF code has to be re-entrant

It's taken as gospel that code written for TPF has to be re-entrant and, as part of this myth, it is assumed that writing such code is difficult. Since TPF never interrupts a running ECB and switches to another ECB, code does not have to be re-entrant. It does, however, have to be serially reusable between implied-wait SVC calls. This means that you can, if you so desire, store variables in the program space as long as you don't expect them to survive an SVC call. It also means that it is possible to write self-modifying code, as long as the code is corrected before an SVC. This is, of course, appalling practice, but I have seen examples of it. One common example: because the usage count of a program is maintained in the program header, it is sometimes incremented by the program itself to ensure that it does not get unloaded. This does, however, have several drawbacks. Because programs are modified, any attempt to load programs into storage that can only be read by ECBs would cause those programs to fail. In an ideal operating system, all programs would be loaded into read-only storage so they cannot be modified, either deliberately or inadvertently.
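
To make the distinction concrete in C terms, here is a minimal sketch with invented names (it is not TPF code): a routine that keeps state in static storage is only serially reusable, which is fine as long as one activation is always finished with that state before the next one starts, while a genuinely re-entrant routine keeps all its state in locals and caller-supplied storage.

    #include <stdio.h>
    #include <stddef.h>

    /* Serially reusable, but NOT re-entrant: the static buffer is program-level
       state shared by every caller.  This is safe only if one activation is
       always finished with the buffer before another starts - the moral
       equivalent of not expecting program storage to survive an SVC call. */
    const char *format_fare_serial(int dollars, int cents)
    {
        static char buf[32];
        snprintf(buf, sizeof buf, "$%d.%02d", dollars, cents);
        return buf;
    }

    /* Re-entrant: all state lives in locals and in the caller-supplied buffer,
       so any number of interleaved activations are safe. */
    void format_fare_reentrant(char *buf, size_t buflen, int dollars, int cents)
    {
        snprintf(buf, buflen, "$%d.%02d", dollars, cents);
    }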

Since it is the responsibility of the Operating System to decide when to load and unload programs, the information it uses to determine that, i.e., the usage count, should be maintained in protected storage available to the OS only. While working at Danbury on the implementation of 4K blocks and Centralized List Handling (CLH), this was a change I tried to make, but it was determined that we would break too many badly behaved programs, so it was dropped.

Structured Code is always less efficient

I regard this as a classic case of muddled thinking. Whenever I have found that it is hard to write structured code, I have always discovered, after much thought, that it's because I was trying to solve the problem incorrectly. If you adequately define the problem, the solution can always be structured nicely, which makes it easier to follow. It is true that using the Structured Programming Macros (SPM) did, on occasion, lead to branches to branches to branches as the indentation unwound. However, if you structure the code correctly, you can be sure that this happens in the least used path. I remember that we rewrote the CPU loop in SPM as a part of CLH. The TPF bigots were appalled that the code eventually led to 4 branches being executed in a row. This was used as a classic case of how inefficient SPM was. What the simple-minded failed to realize was that while this was indeed true, it occurred in a path that ended after the final branch at an LPSW that loaded a wait state. Thus it is true that 3 unnecessary branches were taken en route to doing..... nothing!

But this only leads to a further conclusion. It is often the case that code written in a high level language can be more efficient than code written in assembler. Someone on a thread in TPF'ers said that some of the best assembler code he had ever seen was generated by C/C++ compilers. This is to be expected, even if it is counterintuitive. A compiler, looking at the big picture, wouldn't generate branches to branches; it would generate code to branch directly to the end. And it would do this without damaging the structure of the source code.

Let me give another example of where a compiler can generate better code than a good assembler programmer.

*       It's been so long, forgive me if I make some syntactical errors
*       Some Return codes - defined in some macro somewhere
OK      EQU   0
BAD     EQU   OK+1
VBAD    EQU   BAD+1
.
.
.

* Some code to process the return code that is returned in register 0
        CH    0,=AL2(OK)
        BNZ   ERROR
...
ERROR   Code continues.

This code is trivial, but it exemplifies some good practices. The values OK, BAD and VBAD are not hard coded; EQU is used instead. This means that if you change the values, the code will continue to work as intended, always assuming that the code setting the return code is using the same set of equates. But, I hear you say, it would be more efficient to use LTR 0,0 to test for zero. Indeed it would, but if you do that, then the code breaks if the value for OK is ever changed. But now let's look at what happens in a high level language.

    enum { OK, BAD, VBAD };
.
.
.
    if (return_code == OK)
.
.
.

In this case, the compiler, knowing at compile time that OK is equal to 0, can generate an LTR itself, but if you change the enum, the code generated next time will simply be different. It doesn't break. This is a trivial example, but extend it to situations where the compiler might need to multiply a number by a power of 2: since it knows the value at compile time, it can generate an SLL rather than an MH. You can't safely do that in assembler, because if the number you wish to multiply by changes from 2 to 3 due to an unrelated change in a DSECT somewhere, the hand-coded shift would break.
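
The same idea in C, as a sketch (ENTRY_SIZE and entry_offset are invented for illustration): because the multiplier is a compile-time constant, the compiler is free to pick the cheapest instruction sequence for whatever value the constant currently has, and nothing at the call sites breaks when the value changes.

    #include <stddef.h>

    /* Illustrative only: the size of one table entry, analogous to a value
       picked up from a DSECT.  Change it from 8 to 12 and the function below
       still compiles to correct (if different) machine code. */
    #define ENTRY_SIZE 8

    /* Byte offset of entry n.  With ENTRY_SIZE equal to 8 a compiler will
       typically emit a shift (the SLL case); with 12, a multiply or a
       shift-and-add sequence.  The source never needs to know which. */
    size_t entry_offset(size_t n)
    {
        return n * ENTRY_SIZE;
    }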

There are many more examples where a compiler can generate more efficient code. Consider a case where you test a condition and do one thing if it's true and another if it's false. Over time, it's conceivable that mods are made to both paths independently and that both paths end up containing identical pieces of code. Because the compiler is looking at the bigger picture, it can see this common code and merge it into a single path. Remember that code that looks totally different to you might look identical to a compiler. It's even possible that you might notice the similarity in the assembler code and extract it yourself, but if you did that you would be falling into the trap of "coincidental binding", and the code would become extremely difficult to follow.
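
A small C sketch of the situation (the names are mine, purely for illustration): both arms of the if end with the same code, and an optimizing compiler may merge that duplicated tail in the generated object code while the source keeps the two paths separate and readable.

    #include <stdio.h>

    static int balance = 0;

    /* Invented helper: the "identical piece of code" both paths end up with. */
    static void log_change(const char *kind, int amount)
    {
        printf("%s %d, balance now %d\n", kind, amount, balance);
    }

    /* Both arms of the if end with the same call.  A human who extracts that
       call "because it looks the same" has coupled the two paths by
       coincidence; a compiler can merge the duplicated tail in the generated
       code without touching the structure of the source. */
    void apply(int amount, int is_credit)
    {
        if (is_credit) {
            balance += amount;
            log_change("credit", amount);
        } else {
            balance -= amount;
            log_change("debit", amount);
        }
    }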

It is for reasons like this that compilers often have a switch to inhibit optimization for debugging purposes: the code generated with optimization enabled can be extremely difficult to follow. But optimization is very useful to have in production.

4K blocks

When we implemented 4K blocks, I fought very hard to make the block 4096 bytes long instead of 4095 bytes. My thinking was that at some time in the future, someone would realize that it was ridiculous to split a program up into components that would each fit within a 4K block, and that instead it would be possible to write a program many kilobytes or even megabytes long, simply load it into multiple 4K blocks and use virtual addressing to make them all appear contiguous. I envisioned systems that, instead of constantly fetching programs from DASD, would load them completely into RAM at boot time. Even though this wasn't possible at that time, I thought limiting the block to 4095 instead of 4096 made absolutely no sense, but such was the shortsightedness of the TPF development team at Danbury, and of Bob Dryfus in particular, that the blocks became 4095. It has always puzzled me why TPFers in general have this tremendous instinct never to look forward and always to hanker after the past.

I don't blame the people who designed PARS back in the 1960s. The world was very different then and the art of software was very undeveloped. Computers were so lame back then, and memory so small, that every byte used or instruction executed mattered. But today things are different. In the next room I have a PC server with 24 GB of RAM and dual quad-core processors. We buy disks with capacities measured in terabytes that fit in the palm of your hand. My guess is that the entire program base of a TPF system would fit easily onto a single SSD. It simply doesn't make sense to continue to use 1960s or even 1980s software technology in 2015.

Saturday, February 21, 2015

A rant about TPF

In a post on Facebook in the TPF'ers group, Jerri Peterson said "Assembler was always my true love. How do programmers truly know what they are doing if they don't understand assembler!????" It is certainly true that to understand fully how computers work, a knowledge of an Assembler-like language is essential. However, a knowledge of Assembler is far from sufficient to make a good programmer. The experience of TPF shows that there are large numbers of people familiar with Assembler who have very few skills when it comes to programming. One of the problems with the TPF mentality is that an obsession with writing clever assembler was considered vastly preferable to a thorough understanding of algorithms.

Everyone always raves about how efficient TPF is and quotes large numbers of transactions per second as proof. Of course, what few understand is that the TPF idea of a transaction would be considered laughable by most in the IT business. MD, MU, 1* etc. are all considered by TPF to be transactions. When building a PNR, each entry, i.e., the name, address, telephone number etc., is considered a transaction. In the real world, only entries like ET are counted as real transactions, because that is when some permanent mark is left in the database. But it doesn't matter, because it is essential to indoctrinate the new TPFer into the infallibility of TPF. The reality is somewhat different. Some of the worst examples of programming I have ever seen have been written by people in TPF assembler.

Let me give a couple of examples. Years ago, while I was working on Amadeus in Miami, they were having trouble running schedule change. This was in testing and it all worked, but when tried with the typical volumes that would be required in production, each nightly run was taking 40 hours. This is clearly unacceptable. The System One assembler gurus pored over the code and were just unable to tweak it. It was all written in assembler and it couldn't be improved. IBM brought in someone from DC who studied the program and went away to rewrite it in PL/1. The TPFers scoffed at the very idea. But he came back with a program that worked and ran in, as I recall, about 6 hours.

So how did a program written in PL/1 manage to defeat an extremely efficient assembler program? The answer is simple: it used intelligent algorithms. The assembler program maintained the schedules as one huge array in RAM (called core in those days), and since this was for many carriers, the size of the array was measured in megabytes. This was back in the day when 64 MB of RAM was an amazing number. Every time an entry was inserted into the schedule, all the schedules below the insertion point had to be moved down, and when a schedule was deleted, all those following had to be moved up to fill the gap. Even though it was written efficiently, the cycles required to move vast amounts of data slowed the program to a crawl. The PL/1 program instead kept a slice of storage for every schedule but, rather than moving the data, maintained a chained list. Each item in the list held a pointer to the next item and a pointer to the previous item. To remove an item from the list, he simply ran down the chain till he found the item that needed deleting and made the previous one point to the next one, without moving any data. Similarly, to add an item, it was added to the end of storage and the appropriate pointers were updated. This wasn't particularly clever, but it demonstrated that really thinking about what is going on is much more important than minimizing path length.
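
As a rough sketch of the technique in C (the structure and function names are mine, not from the original PL/1 program): deleting or inserting an entry in a doubly-chained list touches only a couple of pointers, no matter how many megabytes of schedule data sit on either side of it.

    #include <stddef.h>

    /* One schedule entry; the payload fields are placeholders. */
    struct sched {
        struct sched *prev;
        struct sched *next;
        /* ... carrier, flight, dates, etc. ... */
    };

    /* Remove an entry by relinking its neighbours.  Nothing else in storage
       moves, so the cost is the same whether the list holds ten entries or
       ten million. */
    void sched_unlink(struct sched **head, struct sched *item)
    {
        if (item->prev)
            item->prev->next = item->next;
        else
            *head = item->next;          /* item was the first entry */

        if (item->next)
            item->next->prev = item->prev;

        item->prev = item->next = NULL;
    }

    /* Insert new_item immediately after pos, or at the head if pos is NULL.
       Again only a handful of pointers change. */
    void sched_insert_after(struct sched **head, struct sched *pos,
                            struct sched *new_item)
    {
        if (pos == NULL) {
            new_item->prev = NULL;
            new_item->next = *head;
            if (*head)
                (*head)->prev = new_item;
            *head = new_item;
        } else {
            new_item->prev = pos;
            new_item->next = pos->next;
            if (pos->next)
                pos->next->prev = new_item;
            pos->next = new_item;
        }
    }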

Before I even came to the US, back in the mid 70s, I was working for a company which had won a contract to produce, for a large national company, a network that would allow all of their terminals to access any of their mainframes. We developed an X.25 based protocol. Each computer on the network either interfaced with terminals or with mainframes, or was just a node for routing packets. The contract called for the nodes to be Ferranti Argos minicomputers which booted from cassette tapes and had no other peripherals, just RAM. The project called for these nodes to be capable of switching packets at, as I recall, 240 packets per second. The project was staffed partly by TPF people drawn from British Airways and partly by non-TPF people. I was a rarity in that I had had a lot of non-TPF experience but, following the merger of BEA and BOAC, had moved into TPF. Thus I had experience from both sides.

It was a condition of the contract that all development be done in a high level language and that it follow good structured programming guidelines. As you can imagine, it was a terrific battle to get the TPFers to embrace the notion that the most important thing was to write good, structured code and that performance would be looked at later, after it was working.

After the project was complete and working, we set about testing throughput. We did this by arranging 4 nodes in a square and injecting a message into a node destined for a remote node. The node would do its routing and decide where to route the packet, but the code was patched to send it to the next node in the square. This way we did all the routing work but the packet would end up flying round and round in circles. Periodically we would inject another packet into a node until the nodes couldn't handle the load. Since we were counting the number of packets switched per second, we were able to ascertain that number after each test. From memory, our first attempt yielded somewhere around 80 packets per second. There was much "I told you so" from the TPFers, but what we did next was add a trap to the timer interrupt to record the equivalent of the PSW at every timer tick. Examining this later allowed us to draw a bar graph of where most of the CPU time was going. This was a message-passing OS, so to communicate between separate processes a message was copied from the memory of the sending process to the memory of the receiving process. This routine was written, like the rest of the OS, in a high level language. We rewrote this one routine in assembler, changing it from a for loop to the equivalent of a BCT loop, and reran the test. The next test showed around 160 packets per second. Repeating this process to eliminate the next two or three significant bottlenecks meant that we easily and quickly met our targets, and without destroying the integrity of the code.
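
A tiny C sketch of the shape of that fix (the routine names are invented; the original was, of course, not written in C): the inter-process message copy was a plain per-element loop in the high level language, and the speed-up came from replacing just that one routine with the fastest block move to hand - a hand-coded BCT loop then, something like memcpy today - while the callers and the structure of the OS were left untouched.

    #include <stddef.h>
    #include <string.h>

    /* The original shape: copy a message one byte at a time from the sending
       process's buffer to the receiver's.  Perfectly clear, but this is where
       the bar graph said most of the CPU time was going. */
    void copy_message_naive(char *dst, const char *src, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[i] = src[i];
    }

    /* The fix, confined to this one routine: use the fastest block move
       available (the 1970s equivalent was the hand-written BCT loop). */
    void copy_message_fast(char *dst, const char *src, size_t len)
    {
        memcpy(dst, src, len);
    }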

It may have been necessary back in 1960, but TPF has been using obsolete techniques for decades and has steadfastly fought any efforts at rational design.

There is an insistence within the TPF community that everything must be done incorrectly. I remember having huge arguments with the people at Danbury over the ATTAC and DETAC macros. They decided, in their infinite wisdom, that if you attempted to DETAC a level that was not currently in use, it would issue a dump. Not only did this require unnecessary code, because the OS had to check for the condition instead of just making a copy; it also caused unnecessary programming in programs that were used as utilities. A program that could be called by any other program in order to perform a task couldn't just DETAC, say, level 15 and then ATTAC at the end. Instead it had to check to see if there was anything on level 15. If there was, it could DETAC, but if not it needn't bother. But now it has to remember whether it did a DETAC or not, so it has to set a bit in the new core block it acquired for its work so that it knows whether to do an ATTAC at the end. Pushing and popping stacks are fundamentals of Programming 101, and yet they were not understood even in the 90s.
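
A minimal C sketch of the two designs (all names are invented; this is not the real ATTAC/DETAC interface): if the detach operation simply reports whether anything was there, a utility can save and restore a level unconditionally, whereas with dump-on-empty semantics every utility has to test the level first and carry its own "did I really detach?" flag.

    #include <stdbool.h>
    #include <stddef.h>

    /* A toy "data level": it either holds a block pointer or is empty (NULL). */
    struct level {
        void *block;
    };

    /* Forgiving pop: detach whatever is on the level, if anything, and report
       whether there was something to restore later.  Callers can use it
       unconditionally. */
    bool detach(struct level *lvl, void **saved)
    {
        *saved = lvl->block;
        lvl->block = NULL;
        return *saved != NULL;
    }

    /* Forgiving push: re-attach whatever was previously detached, or nothing. */
    void attach(struct level *lvl, void *saved)
    {
        lvl->block = saved;
    }

    /* A utility that borrows a level then becomes simply:
           void *saved;
           detach(&lvl, &saved);    // safe whether or not the level was in use
           ...use the level...
           attach(&lvl, saved);     // puts things back exactly as they were
       With dump-on-empty semantics the utility must test the level first,
       remember whether it detached, and branch on that flag again at the end. */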

There are other examples of obsession with efficiency causing problems. I remember an old man in Danbury, we'll call him Frank, who decided that an STM or LM was OK if you were doing more than 4 registers, but that for 4 or fewer, multiple Ls or STs were quicker, so he went through the CP and changed them all. What he overlooked, and we found out the hard way, was that:

         LM      1,4,0(1)     is NOT the same as
         L       1,0(1)
         L       2,4(1)
         L       3,8(1)
         L       4,12(1)

The LM loads all four registers using the original contents of register 1 as the base. In the sequence of single loads, the first L overwrites register 1, so the three loads that follow use the new value of register 1 as their base and pick up data from the wrong place.

It's encouraging to see TPF embracing C, but let's remember that C is now hopelessly obsolete; object-oriented programming is now universally understood. Anyway, that's the end of my rant. Your mileage may vary, but I really wish people would learn that the ability to develop a new system quickly and reliably is much more important than clever assembly programming.

Friday, June 7, 2013

Spying on US phone records, etc.

I find the latest uproar about the Verizon calls and "PRISM" mildly amusing. Many liberals are saying that it's not so much that the data are being acquired but that the programs are secret that bothers them.

This really has to give comfort to al Qaeda. Apparently, it's OK for us to protect ourselves from al Qaeda but ONLY if we explain exactly, in detail, how we plan to do it. This, naturally, will give al Qaeda the ability to develop counter measures. Personally, I am opposed to that, but I guess it would somehow be un-American to put terrorists at a disadvantage.

The fact that the Federal government has access to all calls placed and their duration bothers me a lot less than the fact that AT&T or Verizon et al. have this information in the first place. Corporations in the US have an expressed evil intent to make as much money as possible from the personal information they gather from us. In contrast, the government wants this information to keep us safe. I find it a lot easier to believe the government is on my side than to believe that any corporation is on my side.

As an aside, in the old days, when we were charged by the duration and distance of a telephone call, it made sense for the telephone company to record this information for billing purposes. Now that the vast majority of plans are flat rate, the telephone company really has no reason to gather this information at all. My guess would be that collecting and storing this information is a significant part of the cost of providing the service. Given that I have no real interest in this data, I think that, at the very least, if the government wants access to such information for law enforcement purposes, it should pay for it and my bill should be reduced accordingly.

I accept that the government needs oversight - we can't just trust the government never to misuse the information that is so gained. But, alas, the First Amendment guarantees that the government MUST keep the press in the dark, since if the press learns of a program, there is nothing to stop them printing it. Imagine, for a moment, how D-Day would have turned out in 1944 had the press printed the plans of the Normandy Invasion several days before. I believe some sort of scheme is needed where journalists pick a small number of representatives who will represent the general press. These special journalists could then be briefed fully on such programs, including on the rationale for the program and the need to keep it secret. These journalists would be expected to keep the secrets, but we, the people, could be assured that respected journalists were indeed being briefed. Clearly, if these journalists discover that they are being kept in the dark, then the government has broken the deal and they are free to share and publish what they know. Of course, the administration should also be overseen by the Congress and by the Courts, but we don't seem to trust either of those institutions very much, which is rather sad, when you think about it.

The PRISM program seems to have generated even more outrage, but this is baffling to me. Of course, most people, including journalists, appear not to bother to read even their own stories. But, as far as I understand this program, it appears to be aimed at people accessing, let's use Gmail as an example, Gmail from overseas to communicate with other persons overseas or even in the US. Now, imagine if you are the head of al Qaeda. You could set up an alqaeda.com server and give all your operatives emails at alqaeda.com, but something tells me the US and other governments might get wind of that. So instead, you think to yourself, why don't we all get nondescript accounts from Gmail? We can access them from around the world, and Google is brilliant at keeping its servers running 24/7. Plus, and this is a big plus, we know that the US government is prohibited by its own constitution from monitoring all those emails. So now you have, as the boss of al Qaeda, set up a reliable worldwide email system that your enemy isn't allowed to monitor! Brilliant. So if you work for the FBI in counter terrorism and you have a brain, this might occur to you. And so you think to yourself that as long as you target the communications of non-US residents, you can safely monitor such sites. But, of course, the last thing you want to do is to let the terrorists know that you are monitoring their emails, and the press makes sure that they do.

So it seems to me that we have two choices, and the choice really is up to us. On the one hand, we can insist that programs and surveillance like this stop; on the other, we can let them continue, albeit monitored by some appropriate method. But if we choose the first option, then we also have to agree that we won't cry and protest and get all distraught if a terrorist attack succeeds.

Thursday, June 6, 2013

What is freedom?

The recent Supreme Court decision that allows people who have been arrested to have a sample of DNA taken has caused a lot of controversy. Does this help or hinder "freedom"? We hear all the time on TV about "our freedoms" and always in the context that they are a "good thing". It was alleged that al Qaeda hated us for "our freedoms". So I pose the question "What is freedom?" On one level freedom is obvious, but in reality it's a lot more complicated. In the US, we have been taught to believe that we live in a "free society". Is this the result of indoctrination, wishful thinking, or examination of the facts?

I think we can fairly say that in comparison to the old USSR or to North Korea, we are a free society. I can't imagine anyone arguing with that. But what if we compare ourselves to Europe or Australia or Canada - are we freer than they are or not? I assert that depends very much on your definition of freedom. On the face of it, we would appear to be less free. After all, the incarceration rate in the "land of the free" is higher than that of any peer with which we would wish to be compared. At 5% of the world's population, we have 25% of the prisoners. So clearly, larger numbers of Americans are less free than their counterparts in Europe, etc. And by less free in this context, I am not comparing the quality of freedom, but pointing out that Americans are actually incarcerated, which is unarguably less free than not being in prison.

But are people in the US otherwise free from government intervention and control? Again, no. The US government actually prohibits its citizens, for example, from visiting Cuba. Most governments might warn their citizens against travel to certain countries, advising them that consular aid may not be available or that governments are known to be hostile. But a government of free citizens can hardly prohibit travel, and yet the US does. So it's far from obvious that people in the US are freer or less free than their counterparts.

As I see it there are two distinct forms of freedom - the freedom to and the freedom from. So deciding relative freedoms involves balancing the weight of freedom to against freedom from. And this is where the US has distinctly more freedom to, but the Europeans, Canadians and Australians have more freedom from. So, for example, in the "less free" nations you are unlikely to be sitting in your house and be randomly murdered in a drive-by shooting or some random gang violence, or even in a home invasion. But this is achieved by restricting the freedom to buy as many guns of any type as you like. In reality, I think most of us, within reason, prefer freedom from things. We like that people aren't allowed to be a nuisance, or that people aren't allowed to build really ugly buildings within inches of our own homes. Some people will see it as an infringement of liberties that they aren't allowed to have a raucous party in the middle of the street at 2 am.

So what about taking DNA samples? I can't see that this is fundamentally any different from taking fingerprints or even taking a mug shot. That mug shot or fingerprint stays on file forever and is used to see if prints found at a crime scene match someone previously arrested. Now, it's true that this does seriously impact your freedom to murder people, since it does mean that you are more likely to be apprehended. So it might act as a deterrent. On the other hand, insofar as it does act as a deterrent, it could significantly increase your freedom from - freedom from rape and murder. So, on balance, I support this decision. The fact that Antonin Scalia dissents may be the most compelling reason of all to support it.