List of Archived Posts

2010 Newsgroup Postings (08/02 - 09/04)

IBM zEnterprise
History of Hard-coded Offsets
Five Theses on Security Protocols
Five Theses on Security Protocols
History of Hard-coded Offsets
Memory v. Storage: What's in a Name?
Five Theses on Security Protocols
GSM eavesdropping
Who is Really to Blame for the Financial Crisis?
Who is Really to Blame for the Financial Crisis?
History of Hard-coded Offsets
Idiotic programming style edicts
U.S. operators take on credit cards with contactless payment trial
Is the ATM still the banking industry's single greatest innovation?
Facebook doubles the size of its first data center
History of Hard-coded Offsets
Region Size - Step or Jobcard
Age
Region Size - Step or Jobcard
AT&T, Verizon to Target Visa, MasterCard With Phones
Age
Mainframe Hall of Fame (MHOF)
CSC History
A mighty fortress is our PKI, Part II
Little-Noted, Prepaid Rules Would Cover Non-Banks As Well As Banks
Idiotic programming style edicts
CSC History
Mainframe Hall of Fame (MHOF)
CSC History
Are we spending too little on security? Or are we spending too much??
AT&T, Verizon to Target Visa, MasterCard With Phones
Are we spending too little on security? Or are we spending too much??
Idiotic programming style edicts
What will Microsoft use its ARM license for?
Hardware TLB reloaders
RISC design, was What will Microsoft use its ARM license for?
A Bright Future for Big Iron?
A Bright Future for Big Iron?
U.K. bank hit by massive fraud from ZeuS-based botnet
CPU time variance
Oracle: The future is diskless!
IBM 3883 Manuals
IBM 3883 Manuals
IBM 3883 Manuals
IBM 3883 Manuals
Basic question about CPU instructions
Optimizing compilers
OT: Found an old IBM Year 2000 manual
New U.S. Treasury Rule Would Add Millions to Prepaid Ranks
Announcement from IBMers: 10000 and counting
Has there been a change in US banking regulations recently?
Has there been a change in US banking regulations recently?
Basic question about CPU instructions
Is the ATM still the banking industry's single greatest innovation?
IBM Unleashes 256-core Unix Server, Its Biggest Yet
z millicode: where does it reside?
About that "Mighty Fortress"... What's it look like?
Has there been a change in US banking regulations recently
memes in infosec IV - turn off HTTP, a small step towards "only one mode"
z196 sysplex question
towards https everywhere and strict transport security
Idiotic cars driving themselves
Dodd-Frank Act Makes CEO-Worker Pay Gap Subject to Disclosure
32nd AADS Patent, 24Aug2010
How Safe Are Online Financial Transactions?
How Safe Are Online Financial Transactions?
Win 3.11 on Broadband
Idiotic programming style edicts
Idiotic programming style edicts
z/VM LISTSERV Query
towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
Win 3.11 on Broadband
Idiotic programming style edicts
Idiotic programming style edicts
z millicode: where does it reside?
towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
Mainframe Hall of Fame (MHOF)
Idiotic take on Bush tax cuts expiring
3270 Emulator Software
Nostalgia
Nearly $1,000,000 stolen electronically from the University of Virginia
3270 Emulator Software
Set numbers off permanently
3270 Emulator Software
Set numbers off permanently
Nearly $1,000,000 stolen electronically from the University of Virginia
Baby Boomer Execs: Are you afraid of LinkedIn & Social Media?
UAE Man-in-the-Middle Attack Against SSL

IBM zEnterprise

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: IBM zEnterprise
Newsgroups: bit.listserv.ibm-main
Date: 2 Aug 2010 07:21:59 -0700
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz, Seymour J.) writes:
TYM when did memory become storage. Certainly the use of "memory" predates the S/360.

there was a big deal with the announcement of 370 virtual memory ... about having to change all "virtual memory" references to "virtual storage" references ... resulting in DOS/VS, VS1, VS2, etc. vague fading memory is that the excuse given had something to do with patents or copyright.

cp67 (history) wiki page
https://en.wikipedia.org/wiki/CP/CMS

in the above article ... there is some amount of FUD with regard to the mention of M44/44X ... some claiming that it amounts to little more than claiming that os/360 supervisor services (the SVC interface) provide an abstract virtual environment.

--
virtualization experience starting Jan1968, online at home since Mar1970

History of Hard-coded Offsets

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: History of Hard-coded Offsets
Newsgroups: bit.listserv.ibm-main
Date: 2 Aug 2010 07:41:58 -0700
rfochtman@YNC.NET (Rick Fochtman) writes:
Most of those geometry-related "System Services" didn't exist! :-)

E4 channel command (extended sense) was introduced to start providing device characteristics (theoretically starting to minimize the amount of device information that had to be provided in system/IO sysgens).

Then, FBA drastically simplified things further (compared to CKD) ... eliminating having to know/specify anything ... recent comment
https://www.garlic.com/~lynn/2010l.html#76 History of Hard-coded Offsets

Recently there have been some issues/nits related to FBA ... regarding efforts to migrate from 512-byte to 4096-byte sectors (which might be the closest analogy to all the CKD transitions) ... aka things like

Linux Not Fully Prepared for 4096-Byte Sector Hard Drives
http://www.osnews.com/story/22872/Linux_Not_Fully_Prepared_for_4096-Byte_Sector_Hard_Drives

but it is the first major change since the 70s (other than some transitions involving total number of blocks/sectors supported).

wiki page discussing some of the issues:
https://en.wikipedia.org/wiki/Advanced_Format

a lot of the discussion (alignment & multiples) in the above is almost identical to issues I faced when I did the page-mapped enhancements to the cp67/cms filesystem (nearly 40yrs ago) ... some past posts
https://www.garlic.com/~lynn/submain.html#mmap
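
for a concrete feel of the alignment issue ... a minimal sketch (python; the 4096/512 sector sizes are the standard Advanced Format arrangement, the helper name is made up) of why a write that isn't 4096-byte aligned costs read-modify-write cycles:

  PHYS = 4096   # physical sector size on an Advanced Format drive
  LOG = 512     # logical sector size still presented to the host

  def phys_cost(offset_log, count_log):
      # byte range of the logical write
      start = offset_log * LOG
      end = start + count_log * LOG
      # physical sectors touched by the write
      touched = (end - 1) // PHYS - start // PHYS + 1
      # any partially-covered physical sector forces a read-modify-write
      rmw = int(start % PHYS != 0) + int(end % PHYS != 0)
      return touched, rmw

  print(phys_cost(0, 8))   # aligned 4k write: (1, 0) ... no penalty
  print(phys_cost(1, 8))   # misaligned 4k write: (2, 2) ... two r-m-w cycles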

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Aug, 2010
Subject: Re: Five Theses on Security Protocols
MailingList: Cryptography
re:
https://www.garlic.com/~lynn/2010l.html#72 A slight modification of my comments on PKI
https://www.garlic.com/~lynn/2010l.html#77 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010l.html#79 Five Theses on Security Protocols

One of the other issues in the current payment paradigm ... with or without certificates ... is that the end-user, as relying party, is frequently not in control of the risks & security measures related to their assets (fraudulent transactions against their accounts).

This shows up in what kind of fraud gets publicity (at least before the cal. state breach notification legislation) ... namely the kind that the consumer has some control over ... lost/stolen cards ... and/or recognizing "add-on" ATM cash machine skimmers. There was almost no publicity about breaches and/or instances where skimmers were installed in machines at point of manufacture ... since about the only corrective action that consumers would have (in such cases) was to stop using the card altogether.

I was one of the co-authors of the financial industry X9.99 privacy standard ... and one of the most difficult concepts to get across was that the institution wasn't providing security to protect the institution's own assets ... but providing security to protect the assets of other entities (it required a rethink by security departments about what was being protected from whom ... in some cases it even required the institution to protect consumer assets from the institution itself).
https://www.garlic.com/~lynn/subpubkey.html#privacy

We were somewhat tangentially involved in the cal. state data breach notification legislation ... having been brought in to help wordsmith the cal. state electronic signature legislation.
https://www.garlic.com/~lynn/subpubkey.html#signature

Several of the participants were also heavily involved in privacy issues and had done in-depth, detailed consumer/public surveys ... where the number one issue came up as "identity theft" ... primarily the form involving fraudulent financial transactions ("account fraud") from information harvested in breaches. There seemed to be little or no activity in correcting problems related to breaches ... so they appeared to think that data breach notifications might prompt corrective action (aka ... the crooks would perform fraudulent financial transactions with institutions other than the one that had the data breach ... if nothing else to minimize LEOs determining the source of the information). As a result, institutions having breaches experienced very little downside, and any corrective action was pure cost w/o any direct benefit to the institution (at least prior to data breach notification).
https://www.garlic.com/~lynn/subintegrity.html#harvest

Part of the paradigm change around the x9.59 financial transaction standard relieved the institutions (that had little direct interest in protecting your information) of having to protect your information. Besides security proportional to risk and parameterised risk management ... this also has the concept that the parties at risk have increased control over the actual protection mechanisms (a security failure mode is mandating that parties with little or no vested interest/risk be responsible for the security measures).
https://www.garlic.com/~lynn/x959.html#x959

There is an analogy scenario in the recent financial mess ... involving environment where institutional parties were motivated to do the wrong thing. Congressional testimony pointed out that it is much more effective to change business process environment where the parties have vested interest to do the right thing ... as opposed to all the regulations in the world ... attempting to manage an environment where the parties have a vested interest to do the wrong thing.

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Aug, 2010
Subject: Re: Five Theses on Security Protocols
MailingList: Cryptography
re:
https://www.garlic.com/~lynn/2010l.html#82 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010l.html#83 Five Theses on Security Protocols

minor addenda about speeds & feeds concerning the example of the mid-90s payment protocol specification that had enormous PKI/certificate bloat ... and SSL.

Part of the original SSL security was predicated on the user understanding the relationship between the webserver they thought they were talking to, and the corresponding URL. They would enter that URL into the browser ... and the browser would then establish that the URL corresponded to the webserver being talked to (both parts were required in order to create an environment where the webserver you thought you were talking to was, in fact, the webserver you were actually talking to). This requirement was almost immediately violated when merchant servers found that using SSL for the whole operation cost them 90-95% of their thruput. As a result, the merchants dropped back to just using SSL for the payment part, having the user click on a check-out/payment button. The (potentially unvalidated, counterfeit) webserver now provides the URL ... and SSL has been reduced to just validating that the URL corresponds to the webserver being talked to (or validating that the webserver being talked to is the webserver it claims to be; i.e. NOT validating that the webserver is the one you think you are talking to).
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
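
a minimal sketch of what's left (python stdlib; "example-merchant.com" is a hypothetical host): the handshake can only prove the server matches whatever host name the client was handed ... it can't prove that host name was the one the user intended:

  import socket, ssl

  def server_matches(host):
      # TLS handshake plus certificate/hostname verification
      # using the stdlib default context
      ctx = ssl.create_default_context()
      try:
          with socket.create_connection((host, 443), timeout=5) as sock:
              with ctx.wrap_socket(sock, server_hostname=host):
                  return True
      except (ssl.SSLError, OSError):
          return False

  # if "host" was typed by the user, success ties the server to the user's
  # intent; if "host" came off an unvalidated checkout page, success only
  # proves the server is whoever that page said it was
  print(server_matches("example-merchant.com"))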

Now, the backend of the SSL payment process was an SSL connection between the webserver and a "payment gateway" (which sat on the internet and acted as gateway to the payment networks). Under moderate to heavy load, the avg. transaction elapsed time (at the payment gateway, thru the payment network) round-trip was under 1/3rd of a second. The avg. roundtrip at merchant servers could be a little over 1/3rd of a second (depending on the internet connection between the webserver and the payment gateway).
https://www.garlic.com/~lynn/subnetwork.html#gateway

I've referenced before doing BSAFE benchmarks for the PKI/certificate-bloated payment specification ... using a speeded-up BSAFE library ... the people involved in the bloated payment specification claimed the benchmark numbers were 100 times too slow (apparently believing that the standard BSAFE library at the time ran nearly 1000 times faster than it actually did).

When pilot code (for the enormously bloated PKI/certificate specification) was finally available, using the BSAFE library (the speedup enhancements had been incorporated into the standard distribution) ... dedicated pilot demos of the transaction round trip could take nearly a minute elapsed time ... effectively all of it BSAFE computations (using dedicated computers doing nothing else).

Merchants that had found using SSL for the whole consumer interaction would require ten to twenty times the number of computers (to handle the equivalent non-SSL load) ... were potentially being faced with needing hundreds of additional computers to handle just the BSAFE computational load (for the mentioned, extremely PKI/certificate-bloated payment specification) ... and still wouldn't be able to perform the transaction anywhere close to the elapsed time of the implementation being used with SSL.
https://www.garlic.com/~lynn/subpubkey.html#bloat

--
virtualization experience starting Jan1968, online at home since Mar1970

History of Hard-coded Offsets

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: History of Hard-coded Offsets
Newsgroups: bit.listserv.ibm-main
Date: 2 Aug 2010 14:03:47 -0700
re:
https://www.garlic.com/~lynn/2010l.html#76 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010m.html#1 History of Hard-coded Offsets

the thing that even made the 3390 statement possible was the existence of the whole additional complexity of the CKD virtualization layer on top of FBA devices; aka there is no longer any direct relationship between what the system thinks of as DASD CKD geometry and what actually exists (something that the underlying FBA paradigm had done for the rest of the industry back in the 70s).

misc. past posts on the subject of FBA, CKD, multi-track searches, etc
https://www.garlic.com/~lynn/submain.html#dasd

as I've commented before ... I was told that, even providing fully integrated and tested MVS FBA support, it would still cost an additional $26M for education, training, and publications to ship ... and the business justification had to show incremental profit that more than covered that $26M; aka on the order of $300M or so in additional DASD sales directly attributable to FBA support (the claim then was that customers were buying DASD as fast as it could be built ... and the only thing that FBA support would do was switch the same amount of disk sales from CKD to FBA). It was not allowed to use life-cycle cost savings ... either internal cost savings and/or customer cost savings ... as business justification for shipping FBA support.

--
virtualization experience starting Jan1968, online at home since Mar1970

Memory v. Storage: What's in a Name?

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Memory v. Storage: What's in a Name?
Newsgroups: bit.listserv.ibm-main
Date: 3 Aug 2010 06:41:52 -0700
Kees.Vernooij@KLM.COM (Vernooij, CP - SPLXM) writes:
If inventing a good name is one thing, reusing it is apparently still better. I know at least 3 IBM products/features that were/are called Hydra. Apparently this is a 'monster'ly well working term.

re:
https://www.garlic.com/~lynn/2010m.html#0 IBM zEnterprise

in the same time frame as the "virtual memory" to "virtual storage" change ... there was also work on online computing for DOS/VS and VS1 that was to be called "personal computing option" (PCO; aka sort of an entry version of TSO).

they viewed the work on the morph of cp67/cms to vm370/cms as competition. some part of the PCO group had written a "simulator" and would frequently publish thruput benchmarks ... the vm370/cms group was then required to do "real" benchmarks showing equivalent operation (although doing the real benchmarks consumed a significant percentage of the total development group's resources, compared to the trivial effort required of the PCO group). The vm370/cms benchmarks were sometimes better and sometimes worse than the PCO simulated numbers. However, when PCO was finally operational, it turned out that their real thruput numbers were something like 1/10th of what had been claimed for the simulated numbers.

Before announce, they decided that PCO had to be renamed (to VSPC) ... since PCO was also the label of a political organization in France. misc. past posts referencing PCO:
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2002h.html#51 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002q.html#26 LISTSERV Discussion List For USS Questions?
https://www.garlic.com/~lynn/2003k.html#0 VSPC
https://www.garlic.com/~lynn/2005p.html#38 storage key question
https://www.garlic.com/~lynn/2005q.html#19 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2010f.html#72 Subpools - specifically 241

ACM reference ... '81 IBM JofR&D article using "memory"
http://portal.acm.org/citation.cfm?id=1664864

but there are article references in the above, including '66 IBM System Journal article using "storage"

but from people at science center (also above references), '71 system journal and '72 JofR&D articles using "memory". misc. past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

later, people doing the above articles turned out an IBM (science center) product called vs/repack ... which did semi-automated program reorganization for paged, virtual memory environments. Its program analysis was also used for things like "hot-spot" identification. The program had been used extensively by other product groups (like IMS) as part of improving performance in virtual memory environments.

i distinctly remember, in the runup to virtual memory being announced for 370, corporate pushing a hard line about all "virtual memory" references being changed to "virtual storage" (although the memory is fading about the reason for the change).

I had done a lot with virtual memory and paging algorithms (for cp67) as an undergraduate in the 60s ... which was incorporated in various products over the years. Later, at the Dec81 ACM SIGOPS conference ... I was approached by a former colleague about helping somebody get their PHD at Stanford (on virtual memory algorithms). There had been other academic work in the 60s on virtual memory algorithms ... which was nearly the opposite of what I had done (and of the subject of the '81 Stanford PHD work) ... and there was strong opposition from those academic quarters to awarding the PHD. For some reason there was management opposition to my providing supporting information for the Stanford PHD, which delayed my being able to respond for nearly a year (regarding my 60s undergraduate work) ... copy of part of the old response
https://www.garlic.com/~lynn/2006w.html#email821019

for other topic drift ... somebody's recent blog entry regarding the science center
http://smartphonestechnologyandbusinessapps.blogspot.com/2010/05/bob-creasy-invented-virtual-machines-on.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Five Theses on Security Protocols

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Aug, 2010
Subject: Re: Five Theses on Security Protocols
MailingList: Cryptography
re:
https://www.garlic.com/~lynn/2010l.html#82 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010l.html#83 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010m.html#2 Five Theses on Security Protocols

so possibly "responsibility proportional to vested interest" &/or "vested interest proportional to responsibility" (in addition to security proportional to risk & parameterised risk management).

in the payment process ... the transaction information at risk at the merchants and/or transaction processors (involved in the majority of data breach news because of the fraudulent-transaction financial motivation) ... isn't their own information at risk ... it is the public's information. w/o various regulation, there is little vested interest for those parties to protect assets that don't belong to them.

the analogy in the recent financial mess were unregulated loan originators being able to package loans (w/o regard to loan quality or borrowers' qualifications) into toxic CDOs and pay rating agencies for triple-A ratings (when both the CDO sellers and the rating agencies knew that the toxic CDOs weren't worth the triple-A rating; from fall2008 congressional testimony). The loan originators are the responsible parties for the loans ... but being able to unload every loan (as triple-A rated toxic CDO) eliminated all their risk (some proposals have been floated that loan originators have to retain some financial interest in the loans they originate).

as mentioned, the x9.59 retail payment financial standard took a different approach ... by slightly tweaking the paradigm ... it eliminated the risk associated with those data breaches ... and therefore the dependencies on parties that have no direct motivation to protect the associated information (it didn't try to use regulation and other means to force protection of at-risk assets by parties that have no interest ... it eliminated the risk associated with those assets ... and therefore any requirement to force parties w/o direct vested interest to provide security/protection).

--
virtualization experience starting Jan1968, online at home since Mar1970

GSM eavesdropping

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Aug, 2010
Subject: Re: GSM eavesdropping
MailingList: Cryptography
On 8/2/2010 4:19 PM, Paul Wouters wrote:
"The default mode for any internet communication is encrypted"

the major use of SSL in the world today is hiding financial transaction information, because the current paradigm is extremely vulnerable to a form of replay-attack ... aka using information from previous transactions to perform fraudulent financial transactions.

One of the things done by the x9a10 financial standard working group was to slightly tweak the paradigm with the x9.59 financial standard ... eliminating the replay-attack vulnerabilities ... and therefore the requirement to hide financial transaction details (as a countermeasure) ... which also eliminates the major requirement for SSL in the world today. The side-effect of the x9.59 paradigm tweak is that it eliminated the replay-attack vulnerability of transaction information regardless of where it exists (in flight on the internet, at rest in repositories, wherever).

the other analogy is the security acronym PAIN
• P -- privacy (sometimes CAIN, confidentiality)
• A -- authentication
• I -- integrity
• N -- non-repudiation


where x9.59 can be viewed as using strong Authentication and Integrity in lieu of privacy/confidentiality (required in the current paradigm to hide information as a countermeasure to replay-attack vulnerabilities)
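
a minimal sketch of the idea (python stdlib; an HMAC stands in for the digital signature that x9.59 actually specifies, just to keep the example self-contained ... the key, account number and sequence values are made up): once every transaction is authenticated and sequence-checked, a harvested account number by itself can no longer be replayed:

  import hmac, hashlib

  def sign(key, account, amount, seq):
      # authenticate transaction details plus a monotonic sequence number
      msg = f"{account}|{amount}|{seq}".encode()
      return hmac.new(key, msg, hashlib.sha256).hexdigest()

  def verify(key, account, amount, seq, tag, last_seq):
      ok = hmac.compare_digest(tag, sign(key, account, amount, seq))
      return ok and seq > last_seq   # stale/replayed transactions fail

  key = b"per-account secret shared by consumer & issuer"
  tag = sign(key, "acct-1234", "19.95", 7)
  print(verify(key, "acct-1234", "19.95", 7, tag, last_seq=6))   # True
  print(verify(key, "acct-1234", "19.95", 7, tag, last_seq=7))   # False ... replay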

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Aug, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#48 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#53 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#56 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#60 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#68 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#69 Who is Really to Blame for the Financial Crisis?

In the parallels with the 20s stock market speculation ... there were large numbers purely speculating in real estate ... involving non-owner occupied homes (non-owner occupied speculation, with lots of flipping and turn-over churn, contributed significantly to the inflation/bubble runup). Owner-occupied homes are more analogous to the portion of the public involved in the 20s stock market bubble as collateral damage.

Major fuel for the whole thing was unregulated loan originators being able to pay the rating agencies for triple-A ratings on their (mortgage-backed) toxic CDOs (roughly the role played by Brokers' loans in the 20s ... previous post referencing the Pecora hearings). Toxic CDOs had been used in the S&L crisis, but w/o the same effect as this time ... since w/o the triple-A rating ... there was limited market. Being able to pay for triple-A ratings allowed the triple-A rated toxic CDOs to be sold to all the institutions and operations (including retirement and pension plans) that had mandates to only deal in triple-A rated "SAFE" instruments (as long as the bubble lasted, providing nearly unlimited funds for the unregulated loan originators).

There was also heavy fees & commissions all along the way ... providing significant financial motivation for individuals to play (and more than enough motivation to offset any possible concern the individuals might have regarding the risk to their institution, the economy, and/or the country).

Non-owner occupied speculation might see 2000% ROI (given inflation rates in some parts of the country, inflation further fueled by the speculation) using no-documentation, no-down, 1% interest-only payment ARMs ... flipping before the rates adjusted.

--
virtualization experience starting Jan1968, online at home since Mar1970

Who is Really to Blame for the Financial Crisis?

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Aug, 2010
Subject: Who is Really to Blame for the Financial Crisis?
Blog: IBM co/ex workers
re:
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#40 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#48 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#53 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#56 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#60 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#68 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#69 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010m.html#8 Who is Really to Blame for the Financial Crisis?

The big difference between the use of mortgage-backed securities (toxic CDOs) in the S&L crisis and the current financial mess was unregulated loan originators (regardless of their parent company) being able to pay the rating agencies for triple-A ratings (when both knew that the toxic CDOs weren't worth triple-A ratings ... from fall2008 congressional hearing testimony).

As a result, they effectively had an unlimited source of funding for lending purposes and were able to unload the loans and eliminate all their risk (eliminating any motivation to care about loan quality or borrowers' qualifications). Recent suggestions are for loan originators to retain some interest (and therefore risk) in their loans (as motivation to pay some attention to loan quality and/or borrowers' qualifications). However, that doesn't address the major difference between the S&L crisis and currently ... the rating agencies being willing to "sell" triple-A ratings for the toxic CDOs.

fall2008 congressional hearing testimony claimed that rating agency business processes were "mis-aligned". in theory, the ratings are a "safety and soundness" interest/benefit for the buyers. However, the sellers are paying for the ratings ... and have an interest in getting the highest possible price from the largest possible market (based on the ratings) ... so the rating agencies were responding to the interests of the sellers ... and selling triple-A ratings.

There has been some amount written about how things would be much better if business processes were correctly aligned and the parties motivated to do the right thing ... compared to business processes being mis-aligned and the entities motivated to do the wrong thing (which makes regulation enormously more difficult, when lots of the players are constantly motivated to do the wrong thing).

--
virtualization experience starting Jan1968, online at home since Mar1970

History of Hard-coded Offsets

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: History of Hard-coded Offsets
Newsgroups: bit.listserv.ibm-main
Date: 3 Aug 2010 12:21:05 -0700
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz, Seymour J.) writes:
The 2301 and 2303 were drums; I don't recall the geometry.

2303 drum had slightly over 4k bytes per track.

2301 was almost a 2303 ... except it transferred data on four heads in parallel (i.e. same capacity, 1/4th the number of tracks with four times the capacity/track ... and four times the data transfer rate).

cp/67 used a 2301 4k paging format borrowed from tss/360 that had nine 4k records formatted on each pair of tracks (with the fifth record spanning the track boundary).

the original cp67 install at the univ. in jan68 had fifo queuing for moveable-arm devices and the 2301 drum ... and single operation-at-a-time execution. I modified the 2311/2314 disk support to add ordered arm seek queuing (which would about double 2314 effective thruput under heavy load) and did ordered, multiple page I/O chaining for the 2301 drum.

With single page i/o transfers on the 2301 drum ... cp67 would saturate at about 80 page i/os per second. With chained requests, I could get peaks approaching 300 page i/os per second (chained requests eliminated the avg. 1/2 rotational delay on every page transferred).
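
a back-of-envelope check (python; the ~3500rpm rotation is my assumption for a 2301-class drum, the 4.5 pages/revolution follows from the nine-records-per-track-pair format above, and per-i/o startio/interrupt overhead is ignored):

  REV = 60.0 / 3500      # seconds per revolution ... ~17ms
  PAGES_PER_REV = 4.5    # nine 4k records per pair of tracks

  # single-page i/o: every transfer pays ~half a revolution of latency
  single = 1 / (REV / 2 + REV / PAGES_PER_REV)

  # chained requests: latency amortized away, pages stream back-to-back
  chained = PAGES_PER_REV / REV

  print(f"{single:.0f} vs {chained:.0f} page i/os per second")   # ~81 vs ~262

which lands in the same neighborhood as the observed ~80 saturation and the peaks approaching 300.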

picture of 2301:
http://www.columbia.edu/cu/computinghistory/drum.html

another picture which seems to have gone 404 ... but lives on at wayback machine
https://web.archive.org/web/20030820180331/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/slide12.html
and
https://web.archive.org/web/20060620002221/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/slide12.jpg

picture of 360/67 machine room (w/2301)
https://web.archive.org/web/20030820174805/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/29.html

other posts in this thread:
https://www.garlic.com/~lynn/2010l.html#33 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010l.html#76 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010m.html#1 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010m.html#4 History of Hard-coded Offsets

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Tue, 03 Aug 2010 15:35:50 -0400
for the fun of it ...

How New Technology Is Rewiring Our Brains
http://news.yahoo.com/s/pcworld/20100802/tc_pcworld/hownewtechnologyisrewiringourbrains
How New Technology Is Rewiring Our Brains
http://www.pcworld.com/article/202186/how_new_technology_is_rewiring_our_brains.html

--
virtualization experience starting Jan1968, online at home since Mar1970

U.S. operators take on credit cards with contactless payment trial

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Aug, 2010
Subject: U.S. operators take on credit cards with contactless payment trial
Blog: Payment Systems Network
In the 90s, there was a lot of hype that the telcos were going to take over the payment industry. Part of the scenario was that the telco backends (scaled to handle call records) were the only ones that could handle the anticipated micropayment volumes. Once they had dominance with micropayments ... they would use the position to take over the rest of the payment industry. Some number of the telcos even built a fairly good-sized issuing customer base ... and you would see telco reps at various and sundry payment industry related meetings.

By the end of the 90s, all that had unraveled ... with the telcos having unloaded their issuing portfolios. One postmortem was that the telcos had been fairly used to customers skipping out on monthly bills, w/o a good industry process for followup and collection. Part of this was that those were charges for infrastructure services ... not actually major out-of-pocket money (sort of a cost-of-doing-business surcharge).

However, the credit card business resulted in real out-of-pocket money transfers to merchants ... in merchant settlement. When customers skipped out on payment of credit-card bills ... it was a much bigger hit to the bottom line ... than skipping out on a monthly service bill. Supposedly it was a major culture shock to the telco players dabbling in the credit card issuing business.

And, of course, the micropayment volumes never did take off.

Since then some of the players in payment processing have installed backend platforms that were originally developed (and scaled) for the telco call record volumes.

earlier related post:

AT&T, Verizon to Target Visa, MasterCard With Phones
http://www.businessweek.com/news/2012-05-11/sec-madoff-report-called-into-question

--
virtualization experience starting Jan1968, online at home since Mar1970

Is the ATM still the banking industry's single greatest innovation?

From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Aug, 2010
Subject: Is the ATM still the banking industry's single greatest innovation?
Blog: Payment Systems Network
re:
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing

Jim Gray Tribute, UC Berkeley
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html

from above:
Gray is known for his groundbreaking work as a programmer, database expert and Microsoft engineer. Gray's work helped make possible such technologies as the cash machine, ecommerce, online ticketing, and deep databases like Google.

... snip ...

aka a lot of the database work in areas like transactions and commits provided a higher level of assurance in computer/electronic financial records (for auditors ... as opposed to requiring paper/hardcopy records).

when gray left for tandem ... he handed off some amount of consulting with financial institutions (like BofA) to me.

ATM wiki entry reference machines being designed at Los Gatos lab (even includes reference to some of my posts having worked at Los Gatos)
https://en.wikipedia.org/wiki/IBM_3624

And PIN verification done at the Los Gatos lab as part of ATM work
https://en.wikipedia.org/wiki/Personal_identification_number

and the early magstripe standards managed at the Los Gatos lab
https://en.wikipedia.org/wiki/Magnetic_stripe_card

smartcards were done in europe as "stored-value" cards (i.e. value was theoretically resident in the card) and were designed to be used for offline transactions ... compensating for the cost &/or lack of connectivity in much of the world at the time.

the equivalent market niche in the US became the magstripe stored-value, merchant, gift cards ... doing online transactions (because of greater connectivity and lower cost structure in the US).

we had been asked to design, size, scale, and cost the backend dataprocessing infrastructure for the possible entry of one of the major european smartcards into the US. When we were doing this ... we looked at the rest of the business & cost structure. Turns out that the smartcard operator was making a significant amount of money off the float (the value supposedly resident in the smartcards). Those smartcard programs pretty much disappeared with 1) significant improvement in connectivity and lowered telco costs ... changing the trade-offs, and 2) major european central banks publishing a directive that the operators would have to start paying interest on the smartcard stored value (eliminating their major financial motivation).

in the 70s & 80s, financial backend batch processes had online front-end financial transactions added (including ATM transactions) ... but the transactions still weren't actually finalized until the batch cobol ran in the overnight batch window. in the 90s, there was severe stress on the overnight batch window, with increasing workload and globalization (decreasing the length of the overnight batch window). numerous financial institutions spent billions to re-engineer for straight-through processing (i.e. each transaction runs to completion, eliminating the overnight batch window settlement/processing). The scenario was parallel processing on large numbers of "killer micros" ... which would offset the increased overhead involved in moving off batch. However, there was little upfront speeds&feeds work done ... so it wasn't until late in the deployments that they discovered the technologies (they were using) had overhead inflation of 100 times (compared to cobol batch), totally swamping the anticipated thruput from large numbers of parallel killer micros.
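
the missing speeds&feeds arithmetic is trivial (python; the 100-times and 3-5-times overhead figures are from the text, the 100-mip batch workload is a made-up stand-in):

  def capacity_needed(batch_mips, overhead):
      # aggregate "killer micro" capacity needed just to match existing
      # batch thruput, when each transaction costs `overhead` times
      # more cpu than batch cobol
      return batch_mips * overhead

  print(capacity_needed(100.0, 100))   # 90s technologies: 10,000 mips to break even
  print(capacity_needed(100.0, 4))     # later 3-5x technology: 400 mips ... feasible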

The failures resulted in huge retrenchment and a very risk-averse environment in the financial industry that still lingers on (and contributed significantly to preserving a major portion of the existing big iron/mainframe market). A couple years ago there were attempts to interest the industry in brand-new real-time, straight-through transaction processing technology ... which only had 3-5 times the overhead of batch cobol (easily within the price/performance of parallel "killer micros") ... but the spectre of the failures in the 90s was still casting a dark shadow on the industry.

--
virtualization experience starting Jan1968, online at home since Mar1970

Facebook doubles the size of its first data center

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Facebook doubles the size of its first data center
Newsgroups: alt.folklore.computers
Date: Wed, 04 Aug 2010 10:12:57 -0400
Facebook doubles the size of its first data center
http://www.computerworld.com/s/article/9180119/Facebook_doubles_the_size_of_its_first_data_center

from above:
Facebook has countered that it picked Oregon because of its dry and temperate climate. That allows it to use a technique called evaporative cooling to keep its servers cool, instead of a heavy mechanical chiller. Facebook says the data center will be one of the most energy-efficient in the world.

... snip ...

the backside of western mountain ranges tends to have a "rain shadow" ... semi-arid or arid regions where the coastal winds have dropped their moisture on the western slopes. See that on the eastern slopes of the cascades, rockies, sierras; see it to a lesser extent on the back side of the (lower) santa cruz mountains.

Several of the mega-datacenters have gone into the washington/oregon region ... also has large amounts of water (just not in the air) and hydro-electric power.

past mega-datacenters & hydro-electric power posts:
https://www.garlic.com/~lynn/2002n.html#43 VR vs. Portable Computing
https://www.garlic.com/~lynn/2003d.html#32 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2006h.html#18 The Pankian Metaphor
https://www.garlic.com/~lynn/2007o.html#14 Geothermal was: VLIW pre-history
https://www.garlic.com/~lynn/2008d.html#72 Price of CPU seconds
https://www.garlic.com/~lynn/2008n.html#62 VMware Chief Says the OS Is History
https://www.garlic.com/~lynn/2008n.html#68 VMware Chief Says the OS Is History
https://www.garlic.com/~lynn/2008n.html#79 Google Data Centers 'The Most Efficient In The World'
https://www.garlic.com/~lynn/2008n.html#83 Sea level
https://www.garlic.com/~lynn/2008r.html#56 IBM drops Power7 drain in 'Blue Waters'
https://www.garlic.com/~lynn/2009m.html#90 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2010c.html#78 SLIGHTLY OT - Home Computer of the Future (not IBM)
https://www.garlic.com/~lynn/2010e.html#78 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010j.html#13 A "portable" hard disk
https://www.garlic.com/~lynn/2010j.html#27 A "portable" hard disk
https://www.garlic.com/~lynn/2010k.html#62 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010l.html#51 Mainframe Hacking -- Fact or Fiction

--
virtualization experience starting Jan1968, online at home since Mar1970

History of Hard-coded Offsets

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: History of Hard-coded Offsets
Newsgroups: bit.listserv.ibm-main
Date: 4 Aug 2010 11:34:32 -0700
rfochtman@YNC.NET (Rick Fochtman) writes:
At NCSS we devised a scheme to use 2305 devices for paging. We figured 3 pages per track and we inserted a "gap record" between the pages. Thus we were able to fetch all three pages, from three different exposures, in a single revolution of the device. Ditto for writing a page as well. A guy named Grant Tegtmeier was the "mover and shaker" behind this scheme, as well as some other DASD modifications that also made huge differences in overall performance. Last I knew, he was out in Silicon Valley and I'd sure like to contact him again, for old times' sake.

re:
https://www.garlic.com/~lynn/2010m.html#10 History of Hard-code Offsets

use of gap records was standard on the 2305 (the 2305 track had more than enough room for the dummy records) and (sort-of) on the 3330.

The 2305 had multiple exposures, and it was also possible to dedicate a specific exposure to all requests for records at a specific rotational position ... eliminating chained requests having to process a (chained) seek-head CCW in the rotational latency between the end of one record and the start of the following record (small dummy records were used to increase the rotational latency between the end of the preceding page record and the start of the next page record ... allowing time for processing of the chained seek head). In any case, chained requests amortized the overhead of i/o initiation and interrupt processing across multiple page transfers ... while startio/interrupt per request (using multiple exposures) could improve responsiveness (at the trade-off of more overhead).

the dynamic adaptive resource manager (sometimes called the fairshare scheduler because the default resource policy was fairshare), page replacement algorithms, request chaining (for 2301 & 2314), and ordered seek (for 2314) that I did as an undergraduate were picked up and released in cp67. in the morph from cp67 to vm370 ... a lot of that stuff got dropped. SHARE was lobbying that I be allowed to put a bunch of the stuff back into vm370.

With the failure of future system ... most internal groups had been distracted ... allowing 370 software & hardware product pipelines to go dry ... there was a mad rush to get stuff back into the 370 product pipeline. misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

during that period I continued to do 360 & 370 work ... even sometimes making snide remarks about the practicality of FS. One of my hobbies was doing product distribution & support for internal systems ... at one point peaking at over a hundred with csc/vm ... some old email
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

in any case, the mad rush to get stuff back into the 370 product pipeline ... tipped the scales, allowing bits & pieces of stuff I had been doing to be released ... including the "resource manager" (which had a whole lot more stuff than the straight dynamic adaptive resource manager, and also was the guinea pig for starting to charge for kernel software). misc. past posts mentioning resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
misc. past posts mentioning paging & virtual memory
https://www.garlic.com/~lynn/subtopic.html#wsclock

the above old email references don't mention misc. other stuff I had done ... like multiprocessor support or the microcode assist stuff that I worked on for endicott. some m'code related posts
https://www.garlic.com/~lynn/submain.html#mcode
some old posts mentioning one of the multiprocessor efforts
https://www.garlic.com/~lynn/submain.html#bounce

there were some issues with whether 2305s could really operate on 158s (because of integrated channel overhead) at the standard specified channel cable distances. had some number of poorly performing 158s where it turned out that 2305s were not doing three transfers per rotation ... but taking additional rotations. Things would improve when channel lengths were shortened.

The big problem in this area was the 3330 ... the 3330 track didn't officially allow for a big enough dummy record between three 4k records (to allow for a seek head CCW to be inserted to switch tracks between the end of one page and the start of the next).

again the real problem was with the 158 and latency/overhead in the integrated channel processing. I did a whole series of tests across a number of different processors (148, 4341, 158, 168, 303x, & some clone processors, etc), 3330 controller vendors (not just IBM), and block sizes (looking for the threshold where the seek head could be processed within the rotational latency for a specific block size ... i.e. start with the smallest possible dummy block ... perform the rotational transfer tests, then increase the size ... looking for the minimum dummy block size that could transfer three pages ... all on different tracks ... in one rotation).

most of the clone 3330 disk controllers were faster (required a smaller dummy block size) than the standard 3830. The 148, 4341, and 168 were all much better than the 158. All the 303x processors exhibited the same characteristic as the 158 ... since the channel director used for all 303x processors was a 158 engine with just the integrated channel microcode (and w/o the 370 microcode). I still even have the assembler program someplace that I used for all the tests.
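
a sketch of that search in python terms (the assembler original is described above; the ~806 kbytes/sec 3330 data rate is the published figure, the head-switch times are made-up illustrations):

  BYTES_PER_USEC = 0.806   # 3330 data rate ... ~806 kbytes/sec

  def fits_one_rotation(gap_bytes, head_switch_usec):
      # the dummy record must take at least as long to pass under the
      # heads as the controller/channel needs to process the chained
      # seek-head CCW
      return gap_bytes / BYTES_PER_USEC >= head_switch_usec

  def min_gap(head_switch_usec, step=16):
      gap = step
      while not fits_one_rotation(gap, head_switch_usec):
          gap += step   # grow the dummy block until 3 pages/rotation works
      return gap

  print(min_gap(50.0))    # fast controller/channel ... small dummy block
  print(min_gap(300.0))   # 158-class integrated channel ... much bigger block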

a few older posts discussing the 3330 dummy-block tests (for head-switch chained transfers in single rotation ... on the same cylinder):
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2002b.html#17 index searching
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006t.html#19 old vm370 mitre benchmark
https://www.garlic.com/~lynn/2008s.html#52 Computer History Museum
https://www.garlic.com/~lynn/2009p.html#12 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)

--
virtualization experience starting Jan1968, online at home since Mar1970

Region Size - Step or Jobcard

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Region Size - Step or Jobcard
Newsgroups: bit.listserv.ibm-main
Date: 4 Aug 2010 13:13:28 -0700
m42tom-ibmmain@YAHOO.COM (Tom Marchant) writes:
More precisely, MVT had a single address space

aka VS2/SVS was minimally modified MVT in a single (16mbyte) virtual address space; biggest change was borrowing ccwtrans from cp67 for EXCP ... to take the application-passed channel program and make a copy of it ... substituting real addresses for virtual addresses.

there was a specially modified MVT os/360 release 13 done at boeing huntsville. MVT storage/memory management became heavily fragmented with long running jobs. boeing huntsville had a pair of 360/67s for long running cad/cam jobs with 2250 vector graphics under MVT OS/360 release 13. MVT release 13 was modified to use the 360/67 virtual memory hardware to reorganize storage/memory locations (compensating for the enormous storage fragmentation). there was no paging going on ... just address translation.
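
the trick in miniature (a hypothetical python sketch ... the frame numbers are made up): scattered real frames get presented as one contiguous virtual region, so nothing has to be moved or paged:

  # real-storage frames left scattered by fragmentation (hypothetical)
  free_frames = [3, 9, 12, 20]

  # translation for a job that needs 4 contiguous (virtual) pages
  page_table = dict(enumerate(free_frames))

  def translate(vaddr, page_size=4096):
      vpage, offset = divmod(vaddr, page_size)
      return page_table[vpage] * page_size + offset

  # the job sees one contiguous region; the hardware scatters the references
  print(translate(0))      # virtual page 0 -> real frame 3
  print(translate(4096))   # virtual page 1 -> real frame 9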

the os/360 single address space contributed heavily to pervasive pointer-passing paradigm (also efficiency in limited storage/memory environments). this caused all sorts of problems attempting to deploy MVS with multiple virtual address spaces.

MVS kernel image was made half of each 16mbyte application virtual address space (to simplify pervasive use of pointer-passing APIs).

The problem manifested itself when all the subsystems were also moved into their own separate, individual, virtual address spaces ... now the pointer-passing API between applications and subsystems started to break down. The solution was the "common segment" ... a part of every virtual address space that could have dedicated areas for moving parameters into so as to not break the pointer passing API paradigm. The problem was that the demand for common segment area grew as the size of systems and number of subsystems grew. Some large MVS shops were even facing the possibility of moving from 5mbyte common segment to 6mbyte common segment ... reducing maximum application area to 2mbytes.

Burlington ... an internal chip foundry and big internal MVS shop ... had an enormous problem, with major fortran applications that were constantly threatening to exceed 7mbytes, and lots of carefully crafted MVS systems that maintained the common segment at 1mbyte. A move off MVS to CMS would have eliminated a whole lot of the effort that was constantly going into keeping their applications working in the MVS environment ... but that would have been an enormous blow to MVS prestige and image.
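
the address-space arithmetic behind the squeeze (python; the 8mbyte kernel half is from the description above):

  ADDRESS_SPACE = 16            # mbytes in a 24-bit virtual address space
  KERNEL = ADDRESS_SPACE // 2   # MVS kernel image mapped into every address space

  for common in (1, 5, 6):      # common segment sizes mentioned above
      app = ADDRESS_SPACE - KERNEL - common
      print(f"{common}mbyte common segment -> {app}mbytes for applications")

  # 1 -> 7 (why Burlington's 7mbyte fortran apps just barely fit)
  # 5 -> 3, 6 -> 2 (the large-shop squeeze)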

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Wed, 04 Aug 2010 18:24:49 -0400
Walter Bushell <proto@panix.com> writes:
Heinlein did, for one. Just like our recent economic collapse, anyone interested should have seen it coming. However, there was no money in forecasting either and lots of money in remaining oblivious.

... lots of money in playing ball.

a couple years ago there was an interview with a successful person on wallstreet ... saying that the large successful operations had been gaming the system for years ... and considered it to have little or no downside; something about the people at the SEC being so ignorant that they would never be able to figure out what was going on.

one might make some parallel with successful parasites ... that have to know how to maximize the blood they can suck w/o severely impacting the host (however, they can periodically get carried away).

--
virtualization experience starting Jan1968, online at home since Mar1970

Region Size - Step or Jobcard

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Region Size - Step or Jobcard
Newsgroups: bit.listserv.ibm-main
Date: 5 Aug 2010 06:09:20 -0700
re:
https://www.garlic.com/~lynn/2010m.html#16 Region Size - Step or Jobcard

i.e. MVS was still mostly the same address space ... or every MVS virtual address space was mostly the same (at some installations, 13mbytes & threatening to become 14mbytes, of every 16mbyte virtual address space) ... in large part because of the ingrained, pervasive pointer-passing API paradigm.

--
virtualization experience starting Jan1968, online at home since Mar1970

AT&T, Verizon to Target Visa, MasterCard With Phones

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Aug, 2010
Subject: AT&T, Verizon to Target Visa, MasterCard With Phones
Blog: Payment Systems Network
AT&T, Verizon to Target Visa, MasterCard With Phones
http://www.businessweek.com/news/2012-05-11/sec-madoff-report-called-into-question

Merchant Resistance Could Hobble the Carrier-Led NFC Venture
http://www.digitaltransactions.net/newsstory.cfm?newsid=2599

Smartphones as credit cards: Possibly dangerous, definitely inevitable; Regular smartphone users will need to take security precautions on par with enterprise users
http://www.networkworld.com/news/2010/080610-zeus-malware-used-pilfered-digital.html

...

We had been asked to come in and consult with a small client/server startup that wanted to do payment transactions on their server; the startup had also invented this technology called "SSL" they wanted to use; the result is now sometimes called "electronic commerce".

Somewhat as a result, in the mid-90s, we were invited to participate in the x9a10 financial standard working group which had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments (i.e. ALL: credit, debit, stored-value, point-of-sale, face-to-face, unattended, remote, internet, wireless, contact, contactless, high-value, low-value, transit turnstile, aka ALL). The result was the x9.59 financial standard payment protocol. One of the things done in the x9.59 financial standard was to slightly tweak the current paradigm to eliminate the risk of using information from previous transactions (like account numbers) for a form of "replay-attack" ... i.e. performing fraudulent financial transactions. This didn't eliminate skimming, eavesdropping, data breaches, etc ... it just eliminated the risk from such events ... and the possibility of fraudulent financial transactions as a result.

Now, the major use of SSL in the world today ... is this earlier work we had done for payment transactions (frequently called "electronic commerce") ... to "hide" transaction detail/information. With x9.59, it is no longer necessary to hide such information (as a countermeasure to fraudulent financial transactions) ... so X9.59 also eliminates the major use of "SSL" in the world today.

--
virtualization experience starting Jan1968, online at home since Mar1970

Age

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Age
Newsgroups: alt.folklore.computers
Date: Thu, 05 Aug 2010 17:37:16 -0400
greymausg writes:
Amazingly enough, from memory, the population almost doubled during that time, so there is something that the Vietnamese are better at than war.

not necessarily ... there have been instances of big population jumps in various parts of the world after the introduction of newer medicines and medical techniques (it doesn't require a change in birth rates ... just a reduction in mortality, with a corresponding increase in avg life expectancy). in other places, this has sometimes resulted in subsequent big mortality spikes from starvation, when the population explosion (from the introduction of modern medicine) gets out-of-kilter with what the environment is otherwise able to support.

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hall of Fame (MHOF)

From: lynn@garlic.com (Lynn Wheeler)
Date: 05 Aug, 2010
Subject: Mainframe Hall of Fame (MHOF)
Blog: MainframeZone
from intro:
Mainframe Hall of Fame (MHOF): The MHOF has 25 members ... see
http://bit.ly/cy7e31

Who else has had an important role in the success of the IBM mainframe and should be added to the MHOF?


... snip ...

decade old post with two decade old email distribution (from Beausoleil) regarding approval of A74 for product release
https://www.garlic.com/~lynn/2000e.html#email880622

recent blog reference to advent of the science center (reference from CSC Alumni linkedin group)
http://smartphonestechnologyandbusinessapps.blogspot.com/2010/05/bob-creasy-invented-virtual-machines-on.html

some amount on System/R at System/R reunion website
http://www.mcjones.org/System_R/

original relational/SQL ... done on 370/145 vm370 system in bldg. 28

Note that the first shipped relational dbms was from the Multics group ... which was located on 5th flr of 545 tech sq (science center, virtual machines, internal network, GML, etc ... was on 4th flr of 545 tech sq).

also for System/R ... and a whole bunch of other stuff, Jim Gray ... tribute webpage:
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC History

Refed: **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Aug, 2010
Subject: CSC History
Blog: Cambridge Scientific Center Alumni
re:
https://www.garlic.com/~lynn/2010l.html#74 CSC History
https://www.garlic.com/~lynn/2010l.html#84 CSC History

picture of globe given out for the 1000th vnet node
https://www.garlic.com/~lynn/vnet1000.jpg

1000th node globe

old posts with sample of VNET node announcements (for 1983) .... including announcement for 1000th node
https://www.garlic.com/~lynn/99.html#112

internal network was larger than arpanet/internet from just about the beginning until possibly late 85 or early 86 ... some past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

old post with list of corporate sites that had one or more new VNET nodes added sometime during 1983
https://www.garlic.com/~lynn/2006k.html#8

--
virtualization experience starting Jan1968, online at home since Mar1970

A mighty fortress is our PKI, Part II

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Aug, 2010
Subject: Re:  A mighty fortress is our PKI, Part II
MailingList: Cryptography

https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#64 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#67 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#73 A mighty fortress is our PKI, Part II

Kaspersky: Sham Certificates Pose Big Problem for Windows Security
http://www.ecommercetimes.com/story/70553.html

from above ..

Windows fails to clearly indicate when digital security certificates have been tampered with, according to Kaspersky Lab's Roel Schouwenberg, and that opens a door for malware makers.

... snip ...

Zeus malware used pilfered digital certificate
http://www.computerworld.com/s/article/9180259/Zeus_malware_used_pilfered_digital_certificate
Zeus Malware Used Pilfered Digital Certificate
http://www.pcworld.com/businesscenter/article/202720/zeus_malware_used_pilfered_digital_certificate.html

&

Zeus malware used pilfered digital certificate
http://www.networkworld.com/news/2010/081010-bank-botnet-zeus.html

from above:

The version of Zeus detected by Trend Micro had a digital certificate belonging to Kaspersky's Zbot product, which is designed to remove Zeus. The certificate -- which is verified during a software installation to ensure a program is what it purports to be -- was expired, however.

... snip ...

Certificate Snatching -- ZeuS Copies Kaspersky's Digital Signature
http://blog.trendmicro.com/certificate-snatching-zeus-copies-kasperskys-digital-signature/

...

there was another scenario of certificate-copying (& dual-use vulnerability) discussed in this group a while ago. The PKI/certificate bloated payment specification had floated the idea that when payment was done with their protocol, dispute burden-of-proof would be switched & placed on the consumer (from the current situation where burden-of-proof is on the merchant/institution; this would be a hit to "REG-E" ... and also apparently what has happened in the UK with the hardware token point-of-sale deployment).

However, supposedly for this to be active, the payment transaction needed a consumer-appended digital certificate that indicated they were accepting dispute burden-of-proof. The issue was whether the merchant could reference some public repository and replace the digital certificate appended by the consumer ... with some other digital certificate for the same public key (possibly a digital certificate actually obtained by the consumer for that public key at some time in the past ... or an erroneous digital certificate produced by a sloppy Certification Authority that didn't adequately check for the applicant's possession of the corresponding private key).
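the underlying mechanics, as a sketch (python, using the third-party "cryptography" package; certificates simplified to plain dicts, all names illustrative): a digital signature verifies against a public key, not against any particular certificate ... so unless the certificate itself is included in what gets signed, any certificate carrying the same public key "matches" the signature ... including one with very different terms.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    priv = Ed25519PrivateKey.generate()
    pub = priv.public_key()

    # two "certificates" for the SAME public key, with very different terms
    cert_intended = {"subject": "consumer", "terms": "no liability shift", "key": pub}
    cert_swapped  = {"subject": "consumer", "terms": "consumer accepts burden-of-proof", "key": pub}

    txn = b"pay merchant-17 $29.95"
    sig = priv.sign(txn)             # transaction signed; certificate NOT covered

    for cert in (cert_intended, cert_swapped):
        cert["key"].verify(sig, txn)   # raises InvalidSignature on mismatch; both pass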

Of course, since the heavily bloated PKI/certificate payment specification performed all PKI-ops at the internet boundary ... and then passed a normal payment transaction with just a flag claiming that PKI-checking had passed ... they might not need to even go that far. There were already stats on payment transactions coming thru with the flag on ... where it could be proved that no corresponding PKI-checking had actually occurred. With the burden-of-proof on the consumer ... the merchant might not even have to produce evidence that the appended digital certificates had been switched.

--
virtualization experience starting Jan1968, online at home since Mar1970

Little-Noted, Prepaid Rules Would Cover Non-Banks As Well As Banks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Aug, 2010
Subject: Little-Noted, Prepaid Rules Would Cover Non-Banks As Well As Banks
Blog: Payment Systems Network
Little-Noted, Prepaid Rules Would Cover Non-Banks As Well As Banks
http://www.digitaltransactions.net/newsstory.cfm?newsid=2604

from above:
FinCen is addressing concerns that because prepaid cards and similar devices can be easily obtained and are anonymous, they are attractive for money laundering, terrorist financing, and other illegal activities. Under the rules, providers of prepaid access would be required to meet the same registration, suspicious-activity reporting, customer-information recordkeeping, and new transactional record keeping requirements as banks.

... snip ...

Recently there were articles about too-big-to-fail financial institutions participating in illegal drug money laundering; discovered when the money trail (involved in buying planes for drug smuggling) was followed. Apparently since the gov. was already doing everything possible to keep the institutions afloat, rather than prosecuting, sending the executives to jail, and shutting down the institutions ... they just asked that they promise to stop doing it.

recent references
https://www.garlic.com/~lynn/2010l.html#60 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010l.html#65 Federal Reserve

some of the news articles:

Too Big to Jail - How Big Banks Are Turning Mexico Into Colombia
https://web.archive.org/web/20100808141220/http://www.taipanpublishinggroup.com/tpg/taipan-daily/taipan-daily-080410.html
Banks Financing Mexico Gangs Admitted in Wells Fargo Deal
https://www.bloomberg.com/news/articles/2010-06-29/banks-financing-mexico-s-drug-cartels-admitted-in-wells-fargo-s-u-s-deal
Wall Street Is Laundering Drug Money And Getting Away With It
http://www.huffingtonpost.com/zach-carter/megabanks-are-laundering_b_645885.html?show_comment_id=53702542
Banks Financing Mexico Drug Gangs Admitted in Wells Fargo Deal
https://web.archive.org/web/20100701122035/http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2010/06/28/bloomberg1376-L4QPS90UQVI901-6UNA840IM91QJGPBLBFL79TRP1.DTL
How banks aided drug traffic
http://www.charlotteobserver.com/2010/07/04/1542567/how-banks-aided-drug-traffic.html
The Banksters Laundered Mexican Cartel Drug Money
http://www.economicpopulist.org/content/banksters-laundered-mexican-cartel-drug-money
Money Laundering and the Global Drug Trade are Fueled by the Capitalist Elites
http://www.globalresearch.ca/index.php?context=va&aid=20210
Wall Street Is Laundering Drug Money and Getting Away with It
http://www.alternet.org/economy/147564/wall_street_is_laundering_drug_money_and_getting_away_with_it/
Money Laundering and the Global Drug Trade are Fueled by the Capitalist Elites
http://dandelionsalad.wordpress.com/2010/07/23/money-laundering-and-the-global-drug-trade-are-fueled-by-the-capitalist-elites-by-tom-burghardt/
Global Illicit Drugs Trade and the Financial Elite
http://www.pacificfreepress.com/news/1/6650-global-illicit-drugs-trade-and-the-financial-elite.html
Wall Street Is Laundering Drug Money And Getting Away With It
http://institute.ourfuture.org/blog-entry/2010072814/megabanks-are-laundering-drug-money-and-getting-away-it

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Sat, 07 Aug 2010 10:40:05 -0400
Huge <Huge@nowhere.much.invalid> writes:
I've been in IT(*) since 1975, and I'm watching the "centralised/distributed" changeover coming round for the third time.

43xx & vax sold into the mid-range market in about equal numbers; the big difference for 43xx was large commercial customers putting in orders for multiple hundreds of 43xx machines. The mid-range machines opened up new frontiers with price/performance and being able to put them into (converted) conference rooms and departmental supply rooms (some number of large datacenters were bursting at the seams and didn't have the floor space for adding new stuff ... and adding datacenter space was a major undertaking & expense).

various old 43xx emails ... some discussing reaching datacenter limits for traditional mainframes ... and using 43xx machines for expansion ... putting them out in the local environment with much less upfront infrastructure expense (required for traditional datacenter mainframes).
https://www.garlic.com/~lynn/lhwemail.html#4341

specific old email about AFDS starting out looking at 20 4341s, which turned into 210 4341s
https://www.garlic.com/~lynn/2001m.html#email790404b

in this post:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

above has some references to highend, "big iron" possibly feeling heat from the mid-range starting to take some of their business ... and some resulting internal politics; aka get five 4341s for a lower aggregate price than a 3033 and get more aggregate MIPs, more aggregate real storage, more aggregate i/o capacity (and not require the heavy datacenter investment).

... later scenario was cluster scale-up for ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

... subject of this jan92 meeting mentioned in this old post
https://www.garlic.com/~lynn/95.html#13

and some old email about cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

this particular email was possibly just hrs before the project was transferred and we were told that we weren't to work on anything with more than four processors
https://www.garlic.com/~lynn/2006x.html#email920129

major effort in Medusa was increasing the number of "commodity" processors that could be put into a single rack (and interconnecting large numbers of racks) ... and the physical issues with handling the heat; significantly increasing computing per sq/ft and scaling it up. This had been a continuing theme ever since ... morphing into GRID computing and now CLOUD computing.

within a couple weeks of the project being transferred, it was announced as "supercomputer" ... misc press items
https://www.garlic.com/~lynn/2001n.html#6000clusters1 17Feb92
and
https://www.garlic.com/~lynn/2001n.html#6000clusters2 11May92

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC History

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Aug, 2010
Subject: CSC History
Blog: Cambridge Scientific Center Alumni
re:
https://www.garlic.com/~lynn/2010l.html#74 CSC History
https://www.garlic.com/~lynn/2010l.html#84 CSC History
https://www.garlic.com/~lynn/2010m.html#22 CSC History

there were also posters and other stuff ... but they didn't survive the years ... just the globe.

collection of various old pictures
https://www.garlic.com/~lynn/lhwemail.html#oldpicts

above includes a 2741 APL ball that I used at CSC ... as well as various pictures of terminals at home (and the home tieline) ... although I haven't turned up a picture of the 2741 I had at home (while at CSC).

another image from '83 is cover page from boyd presentation (I had sponsored Boyd's presentations at IBM)
https://www.garlic.com/~lynn/organic.jpg

Boyd organic c&c cover

old posts mentioning Boyd ... along with various URLs from around the web
https://www.garlic.com/~lynn/subboyd.html

while the 1000th node didn't appear until jun ... stuff was being prepared earlier ... old email about getting samples back in april
https://www.garlic.com/~lynn/2006k.html#email830422

above lists 8.5x11 poster for insert into notepad binder, 11x17 wall poster, 3x5 paperweight with reduced version of poster in lucite and the globe ornament.

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hall of Fame (MHOF)

From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Aug, 2010
Subject: Mainframe Hall of Fame (MHOF)
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2010m.html#21 Mainframe Hall of Fame (MHOF)

Norm Rasmussen

past post from 2004 about science center, 40yrs, feb. 1964 ... misc stuff taken from melinda's paper (slightly predates 360 announce)
https://www.garlic.com/~lynn/2004c.html#11

science center, virtual machines, internal network (technology also used for bitnet & earn), lots of online/interactive stuff, invention of GML (later morphed into SGML, HTML, XML, etc), lots of performance stuff (eventually evolving into capacity planning)

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC History

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Aug, 2010
Subject: CSC History
Blog: Cambridge Scientific Center Alumni
re:
https://www.garlic.com/~lynn/2010l.html#74 CSC History
https://www.garlic.com/~lynn/2010l.html#84 CSC History
https://www.garlic.com/~lynn/2010m.html#22 CSC History
https://www.garlic.com/~lynn/2010m.html#26 CSC History

for other topic drift ... this old item mentions VNET PRPQ shipping with full SPM support
https://www.garlic.com/~lynn/2006k.html#email851017

the above mentions my getting the "first SPM release" (for vm370) on 17Aug75 ... I had actually gotten an earlier version and included it in the CSC/VM distribution ... some old email about converting from cp67 to vm370 (and producing the CSC/VM distribution)
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

The '85 "SPM" email was about NIH & product group going thru some number of iterations producing a series of subsets of SPM (iucv, smsg, etc) ... when they could have shipped the full SPM superset at the outset.

--
virtualization experience starting Jan1968, online at home since Mar1970

Are we spending too little on security? Or are we spending too much??

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Aug, 2010
Subject: Are we spending too little on security? Or are we spending too much??
Blog: Financial Cryptography
Are we spending too little on security? Or are we spending too much??
http://financialcryptography.com/mt/archives/001259.html

recent post about security proportional to risk ... merchants' interest in the transaction (information) is proportional to profit ... possibly a couple dollars ... and processors' interest in the transaction (information) is possibly a couple cents ... while the risk to the consumer (and what the crooks are after) is the credit limit &/or account balance ... as a result the crooks may be able to outspend (attacking the system) the merchant/processors (defending the system) by a factor of 100 times.
https://www.garlic.com/~lynn/2010l.html#70
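the arithmetic behind that factor (all dollar figures hypothetical) is straightforward:

    # defenders rationally spend in proportion to their own stake in the
    # transaction; attackers spend in proportion to what they can extract
    merchant_profit   = 2.00       # merchant's interest: a couple dollars
    processor_fee     = 0.02       # processor's interest: a couple cents
    consumer_exposure = 2000.00    # credit limit &/or account balance

    print(consumer_exposure / merchant_profit)   # 1000x the merchant's stake
    print(consumer_exposure / processor_fee)     # 100000x the processor's stake
    # even the conservative reading gives the crooks a 100x (or more)
    # spending advantage over the parties doing the defending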

x9.59 addressed the security proportional to risk aspect ... but it also tweaked the paradigm so the consumer information at the merchants & processors wasn't at risk ... aka the current paradigm is trying to create motivation for the merchants and processors to protect consumer information (where most security infrastructures have entities protecting their own assets ... it gets harder to motivate entities to protect the assets of others)
https://www.garlic.com/~lynn/2010m.html#6
and
https://www.garlic.com/~lynn/2010l.html#79
https://www.garlic.com/~lynn/2010m.html#2

--
virtualization experience starting Jan1968, online at home since Mar1970

AT&T, Verizon to Target Visa, MasterCard With Phones

From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Aug, 2010
Subject: AT&T, Verizon to Target Visa, MasterCard With Phones
Blog: Payment Systems Network
re:
https://www.garlic.com/~lynn/2010m.html#19 AT&T, Verizon to Target Visa, MasterCard With Phones

Big part of the current paradigm involving SSL, PCI, data breaches, skimming, etc .... is that information from previous transactions can be used by crooks for fraudulent financial transactions ... basically a form of replay attack or replay vulnerability.

The X9.59 financial transaction standard slightly tweaked the existing paradigm to eliminate the replay attack vulnerability ... which eliminated having to hide account numbers and/or hide information from previous transactions ... which eliminated the threat from data breaches, skimming, and/or other forms related to criminals harvesting such information for the purposes of performing fraudulent financial transactions.

--
virtualization experience starting Jan1968, online at home since Mar1970

Are we spending too little on security? Or are we spending too much??

From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Aug, 2010
Subject: Are we spending too little on security? Or are we spending too much??
Blog: Financial Cryptography
re:
https://www.garlic.com/~lynn/2010m.html#29 Are we spending too little on security? Or are we spending too much??

some of this goes back to "Naked Transaction Metaphor" ... several old posts ...
https://www.garlic.com/~lynn/subintegrity.html#payments

and related here:

https://financialcryptography.com/mt/archives/000745.html Naked Payments I - New ISO standard for payments security - the Emperor's new clothes?
https://financialcryptography.com/mt/archives/000744.html Naked Payments II - uncovering alternates, merchants v. issuers, Brits bungle the risk, and just what are MBAs good for?
https://financialcryptography.com/mt/archives/000747.html Naked Payments III - the well-dressed bank
https://financialcryptography.com/mt/archives/000749.html Naked Payments IV - let's all go naked

besides the issue of motivating institutions to "protect" vulnerable consumer information ... there is a lot of difficulty (enormous expense and lots of cracks) with attempting to prevent misuse of (vulnerable) consumer/transaction information that is widely distributed and required in a large number of business processes. The x9.59 assertion is that rather than attempting to plug the millions of possible leaks (to prevent the information from falling into the hands of crooks), it is much more effective to tweak the paradigm and eliminate the crooks being able to use the information for fraudulent transactions.

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Sun, 08 Aug 2010 12:42:18 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
43xx & vax sold into the mid-range market in about equal numbers; the big difference for 43xx was large commercial customers putting in orders for multiple hundreds of 43xx machines. The mid-range machines opened up new frontiers with price/performance and being able to put them into (converted) conference rooms and departmental supply rooms (some number of large datacenters were bursting at the seams and didn't have the floor space for adding new stuff ... and adding datacenter space was a major undertaking & expense).

re:
https://www.garlic.com/~lynn/2010m.html#25 Idiotic programming style edicts

past post with decade of vax numbers, sliced and diced by year, model, us/non-us, etc ... by the mid-80s ... much of the mid-range market was starting to give way to workstations and large PCs.
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

--
virtualization experience starting Jan1968, online at home since Mar1970

What will Microsoft use its ARM license for?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What will Microsoft use its ARM license for?
Newsgroups: comp.arch
Date: Mon, 09 Aug 2010 11:57:03 -0400
Andy Glew <"newsgroup at comp-arch.net"> writes:
One of the original motivations of RISC was that a regular, orthogonal, instruction set might be easier for compilers to deal with.

Now the line goes that a compiler can deal with an irregular, non-orthogonal, instruction set.


the other scenario ... I've periodically claimed that John's motivation for 801 in the mid-70s was to go to the opposite complexity extreme of the (failed) future system effort ... not only simplifying the instruction set ... but also making various hardware/compiler/software complexity trade-offs ... decreasing hardware complexity and compensating with more sophisticated compilers and software.

another example was lack of hardware protection (reduced hardware complexity) compensated for by a compiler that only generated correct code ... and a closed operating system that would only load correct programs.

this was the displaywriter follow-on from the early 80s with romp (chip), pl.8 (compiler) and cp.r (operating system). when that product got killed, the group looked around for another market for the box and hit on the unix workstation market. they got the company that did the pc/ix port for the pc ... to do one for their box ... and marketed it as aix & pc/rt. one issue was that the unix & c environment is significantly different from the "only correct programs" and "closed operating system" of the original design (requiring at least some additions to the hardware for the different paradigm/environment).

misc. past posts mentioning 801, iliad, risc, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

misc. past posts mentioning (failed) future system effort
https://www.garlic.com/~lynn/submain.html#futuresys

... trivia ... predating romp was effort to replace the large variety of internal microprocessors (used in controllers and for low/mid range processor engines) with 801 ... some number of 801 Iliad chips configured for that purpose.

an example: the original as/400 (replacing the s/38) was going to be an 801 iliad chip ... but when that ran into trouble ... a custom cisc chip was quickly produced for the product. as/400 did finally move off cisc to an 801 power/pc variant a decade or so later.

--
virtualization experience starting Jan1968, online at home since Mar1970

Hardware TLB reloaders

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hardware TLB reloaders
Newsgroups: comp.arch
Date: Mon, 09 Aug 2010 12:27:33 -0400
rpw3@rpw3.org (Rob Warnock) writes:
ISTR that the reason the 370 had (at least) 4-way associative TLBs was that there were certain instructions that could not make forward progress unless there were *eight* pages mapped simultaneously, of which up to four could collide in the TLB. The famous example of such was the Translate-And-Test instruction in the situation in which the instruction itself, the source buffer, the destination buffer, and the translation table *all* spanned a page boundary, which obviously needs valid mappings at least eight pages. [But only four could collide per TLB line.]

minor trivia ... translate, translate-and-test were against the source buffer ... using the translation table/buffer (which could cross a page boundary). the two additional possible page references were that instead of executing the instruction directly ... the instruction could be the target of an "EXECUTE" instruction; where the 4-byte EXECUTE instruction might also cross a page boundary

the feature of the execute instruction was that it would take a byte from a register and use it to modify the 2nd byte of the target instruction for execution ... which in SS/6-byte instructions was the length field (eliminating some of the reasons for altering instructions as they appeared in storage).

note that 360/67 had an 8-entry associative array as the translate look-aside hardware ... in order to handle the worst case instruction (eight) page requirement.

more trivia ... in more recent processors ... translate & translate-and-test have gotten a "bug" fix and become much more complex.

360 instructions always pretested both the origin of an operand and the end of an operand for validity (in the case of variable length operand specification ... using the instruction operand length field ... or in the case of the execute instruction, the length supplied from the register) ... before beginning execution ... in the above example there might be multiple page faults before the instruction execution would actually start.

370 introduced a couple instructions that would execute incrementally (MVCL & CLCL) ... although there were some early machines that had microcode implementation bugs ... that would pretest the end of MVCL/CLCL operand before starting execution.

relatively recently a hardware fix was accepted for the translate & translate-and-test instructions. the other variable length 6-byte SS instructions have the source operand length and the destination operand length identical. translate and translate-and-test instructions have a table as an operand, indexed by each byte from the source. The assumption was that the table was automatically 256 bytes and therefore the instruction pre-test would check validity from start-of-table thru start-of-table plus 255.

it turns out that besides page faults when crossing a page boundary ... there is the possibility of a storage protect violation on the boundary crossing. couple that with some applications that would do translate on a subset of possible values ... and only built a table that was much smaller than 256 bytes. If the table was at the end of a page ... abutting a storage protected page ... the end-of-table precheck could fail ... even tho the translate data would never actually result in a reference to protected storage.

so now, translate and translate-and-test instructions have a pretest for whether the table is within 256 bytes of a page boundary ... if not ... it executes as it has since 360 days. if the target table is within 256 bytes of the end of a page ... it may be necessary to execute the instruction incrementally, byte-by-byte (more like mvcl/clcl).
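a python sketch of that pretest decision (page size and names illustrative ... not actual millicode):

    PAGE = 4096
    TABLE_SPAN = 256    # full translate table indexed by one source byte

    def trt_strategy(table_addr: int) -> str:
        offset_in_page = table_addr % PAGE
        if offset_in_page + TABLE_SPAN <= PAGE:
            # whole 256-byte table lies within one page ... safe to pretest
            # table_addr..table_addr+255 once and run at full speed
            return "pretest-whole-table"
        # table abuts the page boundary ... a full-table pretest could fault
        # on a protected page that the actual data never references, so fall
        # back to incremental, byte-at-a-time execution (like mvcl/clcl)
        return "execute-byte-by-byte"

    print(trt_strategy(0x1000))   # pretest-whole-table
    print(trt_strategy(0x1F80))   # execute-byte-by-byte (within 256 bytes of page end)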

--
virtualization experience starting Jan1968, online at home since Mar1970

RISC design, was What will Microsoft use its ARM license for?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISC design, was What will Microsoft use its ARM license for?
Newsgroups: comp.arch
Date: Tue, 10 Aug 2010 09:59:36 -0400
John Levine <johnl@iecc.com> writes:
Meanwhile in a small town in suburban New York, the 801 project was using the PL.8 compiler with state of the art analysis and optimization. It dealt just fine with somewhat irregular instruction sets (it generated great code for S/360) and had its registers firmly under control, so the 801 reflected that.

for the fun of it ... old email with comparison between various pascals and pl.8 with pascal front-end (on same pascal program) ... includes some with execution on the same 3033 (a 4.5mip "high-end" 370):
https://www.garlic.com/~lynn/2006t.html#email810808

in this past post
https://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64?

with misc. other old emails mentioning 801:
https://www.garlic.com/~lynn/2006t.html#email781128
https://www.garlic.com/~lynn/2006t.html#email790607
https://www.garlic.com/~lynn/2006t.html#email790711
https://www.garlic.com/~lynn/2006t.html#email811104

--
virtualization experience starting Jan1968, online at home since Mar1970

A Bright Future for Big Iron?

From: lynn@garlic.com (Lynn Wheeler)
Date: 10 Aug, 2010
Subject: A Bright Future for Big Iron?
Blog: MainframeZone
big iron has a steep learning curve and big upfront costs & expenses. there has been lots of stuff written about the drop-off in educational discounts in the 70s ... and institutions of higher learning moving to other platforms. as a result, when the graduates from that period started coming of age ... their familiarity was with other platforms.

another explanation was that it was much easier for departments to cost justify their own mid-size or personal computers ... and with much simpler learning curve ... it was easier to structure a course around some material and show productive results.

recent posts in thread discussing big explosion in mid-range market in the late 70s (much lower up-front costs, much better price/performance) with reference to decade of numbers ... showing mid-range moving to workstations and larger PCs by the mid-80s.
https://www.garlic.com/~lynn/2010m.html#25
https://www.garlic.com/~lynn/2010m.html#32

however, a big part of the mid-range 43xx numbers was the move into distributed computing with large corporations placing orders for several hundred 43xx machines at a time. even at internal locations ... there was a big explosion in 43xx machines ... which contributed to the scarcity of conference rooms during the period; department conference rooms being converted to house department 43xx machines ... an example was the Santa Teresa lab with every floor in every tower getting a 43xx machine.

The explosion in internal 43xx machines also was major factor in the internal network passing the 1000 node mark in 1983 ... aka internal network was larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86. some past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

During the late 80s ... with the proliferation of distributed computing on workstations and PCs ... the limited connectivity into the datacenter played a big role. There were several products vastly improving big iron distributed computing (from the disk division), which were blocked by the communication division ... claiming strategic responsibility (i.e. it threatened the communication division terminal emulation install base). Finally at the annual world-wide communication division internal conference, one of the senior disk engineers got a talk scheduled ... and opened it with the statement that the communication division was going to be responsible for the demise of the disk division. some past posts mentioning terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

--
virtualization experience starting Jan1968, online at home since Mar1970

A Bright Future for Big Iron?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 10 Aug, 2010
Subject: A Bright Future for Big Iron?
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2010m.html#36 A Bright Future for Big Iron?

a large reservoir of big iron has been the financial industry ... back ends with huge amounts of financial data. in the 90s, this was threatened by "killer micros".

in the 70s & 80s, backend batch processes had online front-end financial transactions added ... but the transactions still weren't actually finalized until the batch cobol ran in the overnight batch window. in the 90s, there was severe stress on the overnight batch window, with increasing work and globalization decreasing the length of the window. numerous financial institutions spent billions to re-engineer the backend for straight-through processing (i.e. each transaction runs to completion, eliminating the overnight batch window processing). The scenario was parallel processing on a large number of "killer micros" which would offset the increased overhead involved in moving off batch. However, there was little upfront speeds&feeds work done ... so it wasn't until late in the deployments that they discovered the technologies (they were using) had overhead inflation of 100 times (compared to cobol batch), totally swamping the anticipated thruput from large numbers of parallel killer micros.

The failures resulted in huge retrenchment and a very risk-averse environment in the financial industry that still lingers on (contributing significantly to preserving a major portion of the existing big iron market). A couple years ago there were attempts to interest the industry in brand-new real-time, straight-through transaction processing technology ... which only had 3-5 times the overhead of batch cobol (easily within the price/performance of parallel "killer micros") ... but the spectre of the failures in the 90s was still casting a dark shadow on the industry.

some recent posts mentioning overnight batch window & straight-through processing efforts
https://www.garlic.com/~lynn/2010.html#77 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010b.html#16 How long for IBM System/360 architecture and its descendants?
https://www.garlic.com/~lynn/2010b.html#19 STEM crisis
https://www.garlic.com/~lynn/2010e.html#77 Madoff Whistleblower Book
https://www.garlic.com/~lynn/2010f.html#56 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010h.html#47 COBOL - no longer being taught - is a problem
https://www.garlic.com/~lynn/2010h.html#78 Software that breaks computer hardware( was:IBM 029 service manual )
https://www.garlic.com/~lynn/2010i.html#41 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#3 Assembler programs was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010l.html#14 Age

--
virtualization experience starting Jan1968, online at home since Mar1970

U.K. bank hit by massive fraud from ZeuS-based botnet

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 11 Aug, 2010
Subject: U.K. bank hit by massive fraud from ZeuS-based botnet
Blog: Financial Crime Risk, Fraud and Security
U.K. bank hit by massive fraud from ZeuS-based botnet
http://www.networkworld.com/news/2010/081010-zeus-trojan-raids-3000-uk.html

from above:
Security vendor M86 Security says it's discovered that a U.K.-based bank has suffered almost $900,000 (675,000 Euros) in fraudulent bank-funds transfers due to the ZeuS Trojan malware that has been targeting the institution

... snip ...

some other

Malware gang steal over 700K pds from one British bank; Cybercrooks scoop cash from thousands of accounts
http://www.theregister.co.uk/2010/08/10/zeus_uk_bank_ripoff/
Zeus Trojan raids 3,000 UK bank accounts; Banks and antivirus powerless to stop attacks
http://www.networkworld.com/news/2010/081710-ibm-unleashes-256-core-unix-server.html

this one has a diagram/graphic of the operation

Zeus Trojan steals $1 million from U.K. bank accounts
http://news.cnet.com/8301-27080_3-20013246-245.html

...

We had been brought in as consultants to a small client/server startup that wanted to do payments on their server. The startup had also invented this technology they called SSL that they wanted to use. The result of that work is now frequently called electronic commerce. There were some number of requirements regarding how SSL was deployed and used that were almost immediately violated.

Somewhat as a result of that work, in the mid-90s, we were invited to participate in the x9a10 financial standard working group that had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. The result was the x9.59 standard for debit, credit, stored-value, electronic ACH, high-value, low-value, point-of-sale, internet, unattended, face-to-face, contact, contactless, wireless, transit-turnstile, ... aka ALL. It also slightly tweaked the paradigm so it eliminated the threats from eavesdropping, skimming and data breaches (it didn't eliminate such activity, it just eliminated the threat/risk that crooks could use the information to perform fraudulent transactions).

About the same time, there were a number of presentations by consumer dial-up online banking operations about motivation for the move to internet (eliminate significant customer support costs with proprietary dial-up infrastructures). At the same time, the commercial/business dial-up online cash-management/banking operations were making presentations that they would never move to the internet (because of the risks and vulnerabilities involved, even with SSL).

for a long time, it has been widely understood that PCs are easily compromised in a large number of different ways. while the x9.59 standard addressed the issue of crooks being able to use information to independently perform fraudulent transactions ... there was also a class of PC compromises where the user was convinced to authenticate/authorize a transaction that was different from what they believed it to be.

somewhat as a result, in the last half of the 90s, there was the EU FINREAD standard ... an independent box attached to the PC that had its own display and input and would generate an authorized transaction that would run end-to-end (from FINREAD to the financial institution). The compromised PC could still mount denial-of-service ... but FINREAD eliminated an additional class of fraud involving compromised PCs (in theory a "locked down" cellphone/PDA might provide similar functionality, wirelessly).
https://www.garlic.com/~lynn/subintegrity.html#finread

the problem was that in the early part of this century, there was a large pilot involving a device that attached to the PC thru the serial port and provided authenticated transaction capability. This quickly ran into enormous consumer support costs with serial-port conflicts ... resulting in a widely spreading, pervasive opinion in the financial industry that such attachments weren't practical in the consumer market. This resulted in nearly all such programs evaporating (including EU FINREAD).

However, the issue wasn't with the attached technology ... it was with using the serial-port interface. This was also a major issue with the consumer dial-up proprietary online banking moving to the internet (all the consumer support problems with serial-port dial-up modems). Apparently all the institutional knowledge regarding serial-port conflicts and enormous consumer support issues managed to evaporate in a five year period. Also, it didn't need to have occurred; by that time there was USB, and a major part of USB was addressing all the serial-port issues and problems ... the pilot might have used USB attachment rather than the serial port.

oh, recently there have been some recommendations that businesses that do online internet banking ... get a PC that is dedicated to the function and *NEVER* used for anything else; this would at least address some of the issues raised in the mid-90s about why commercial/business dial-up online banking was never going to move to the internet (at least that was what they were saying at the time).

--
virtualization experience starting Jan1968, online at home since Mar1970

CPU time variance

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: CPU time variance
Newsgroups: bit.listserv.ibm-main
Date: 11 Aug 2010 12:04:32 -0700
eamacneil@YAHOO.CA (Ted MacNEIL) writes:
The simple explanation is, during one of the MVS/SP1.x releases, some things that were done in disabled mode, and under SRB reported CPU, were done in disabled mode and TCB which was allocated to the last active task.

MVS didn't actually directly account for lots of activity ... so percent processor activity captured could possibly be as low as 40% (of total processor activity). Some of the subsystem intensive operations ... that attempted to do everything possible to avoid MVS ... could get captured processor activity up to 80% or so (only 20% of the MVS processor cycles unaccounted for). Operations that billed for usage would potentially take overall capture ratio (as percent of overall total processor usage) ... and prorate all usage by that amount.
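a worked example of capture-ratio proration (all numbers hypothetical):

    measured_task_cpu = 40.0     # CPU seconds actually attributed to tasks
    total_cpu_busy    = 100.0    # CPU seconds the processor was actually busy
    capture_ratio     = measured_task_cpu / total_cpu_busy    # 0.40

    # a billed job's measured time is inflated to spread the uncaptured cycles
    job_measured = 3.0
    job_billed   = job_measured / capture_ratio    # 7.5 seconds billed
    print(capture_ratio, job_billed)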

VM accurately accounted for all usage (didn't have the concept of things like "uncaptured" processor usage). The variability that VM operation might have would be from things like concurrent activity affecting task switching rates ... which affect things like cache hit ratio.

Processor thruput (number of instructions executed per second) is sensitive to processor cache hit ratios ... actually to the frequency with which instructions stall waiting for data to be fetched from main storage/memory (because the data isn't in the cache). With increasing processor speeds w/o corresponding increase in storage/memory speeds ... there is increased mismatch between processor execution speeds and the slow-down that happens when waiting for data not in the cache.

When there is lots of task switching going on ... much of the data in the processor cache may have to be changed in the switch from one task to another ... resulting in very high cache miss rate during the transition period (significantly slowing down the effective processor execution rate and increasing the corresponding measured CPU utilization to perform some operation).

Over the past couple decades ... the instruction stall penalty has become more and more significant ... resulting in lots of new technologies attempting to mask the latency in waiting for data to be fetched from memory/storage (on cache miss) & keep processor doing something ... aka things like deep pipelining and out-of-order execution ... which may then also result in branch prediction and speculative execution.

in any case, variability in task switching and other concurrent activity (that might occur in a vm environment) ... can result in variability in cache miss rate ... and therefore some variability in effective instructions executed per second (i.e. variability in elapsed cpu used to perform some operation).

LPAR activity (effectively a subset of VM function in the hardware) could also have similar effects on cache miss rates.
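a simple model (all numbers hypothetical) of how cache misses inflate measured CPU time for identical work:

    # effective cycles-per-instruction = base CPI plus miss-rate * miss penalty
    def effective_cpi(base_cpi: float, miss_rate: float, miss_penalty: float) -> float:
        return base_cpi + miss_rate * miss_penalty

    instructions = 100_000_000
    clock_hz = 1_000_000_000
    for miss_rate in (0.001, 0.01, 0.03):    # cache misses per instruction
        cpi = effective_cpi(1.0, miss_rate, 200.0)
        print(miss_rate, instructions * cpi / clock_hz, "sec")
    # same 100M instructions: 0.12 sec at 0.1% misses ... 0.7 sec at 3% misses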

--
virtualization experience starting Jan1968, online at home since Mar1970

Oracle: The future is diskless!

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Oracle: The future is diskless!
Newsgroups: bit.listserv.ibm-main
Date: 11 Aug 2010 12:21:36 -0700
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
Is that anything like thin film replacing core? Or bubbles?

Predicting that a technology will be supplanted is easy. Accurately predicting what will replace it and when is hard.


lots of DBMS are disk-centric, based on the home position for data being located on disk ... with real memory/storage used to cache records.

with the increase in real memory sizes, lots of databases can completely fit in real storage. there has been work on "in memory" databases ... a big motivation was the telco industry ... being able to handle call record volumes. there were some benchmarks of these "in memory" databases (that might use disks for sequential writing involving logging/recovery) against a standard DBMS that had enough memory/storage to completely cache the full DBMS ... and the "in memory" database still got ten times the thruput of the disk-centric DBMS (even with all data cached).
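a toy illustration of why a disk-centric engine with a 100% buffer-cache hit rate can still lose (grossly simplified ... real engines also pay for pinning, latching and record-slot decoding on every access, which is where most of the reported gap came from):

    import time

    # disk-centric: records live "in pages", reached thru a buffer manager
    buffer_cache = {}
    for i in range(100_000):
        buffer_cache.setdefault(i // 100, {})[i] = "row-%d" % i
    # in-memory: records reached directly, no page indirection
    in_memory = {i: "row-%d" % i for i in range(100_000)}

    def disk_centric_get(key):
        page = buffer_cache[key // 100]   # buffer-manager lookup (a "hit")
        return page[key]                  # then locate the record within the page

    t0 = time.perf_counter()
    for i in range(100_000): disk_centric_get(i)
    t1 = time.perf_counter()
    for i in range(100_000): in_memory[i]
    t2 = time.perf_counter()
    print("disk-centric(cached): %.3fs  in-memory: %.3fs" % (t1 - t0, t2 - t1))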

In the middle/late 90s, there was prediction that the telcos would take over the payment industries ... because the telcos were the only operations (based on scale-up to handle call-record volumes) that were capable of handling the predicted volumes in micro-payments. Once firmly entrenched handling micro-payment volumes ... they would then move upstream to take over the rest of the payment industry. Well, the micro-payment volumes never materialized ... and the telco industry has yet to take over the payment industry.

However since then, Oracle has acquired at least one of the "in-memory" DBMS implementations ... and some of the payment processors have deployed backend platforms originally developed to handle the telco call record volumes.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3883 Manuals

Refed: **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: IBM 3883 Manuals
Newsgroups: bit.listserv.ibm-main
Date: 11 Aug 2010 16:04:03 -0700
wmhblair@COMCAST.NET (William H. Blair) writes:
I've never seen this documented. But I never looked that deeply, either, so it might have been in my face since 1981. Regardless, I was told it was only for purposes of allocating space on the actual track -- AS IF the device actually wrote 32-byte (the cell size) physical blocks (or multiples thereof). At the time, prior to PCs, this meant nothing special to me. Of course fixed sector sizes for PC drives made more sense, and I assumed the underlying 3380 and 3375 hardware, like the 3370, used a fixed block [or sector] size, which obviously had to be a multiple of 32. Later, I was told (by IBM) that this was, in fact, the case.

i remember that the 32byte data error correcting block was also the physical block on disk ... but I haven't found a documented reference to that effect.
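the track-allocation arithmetic that cell size implies, as a sketch (the 32-byte cell is from the discussion above; everything else illustrative):

    import math

    CELL = 32    # 32-byte cell: the unit of track space allocation

    def cells_for(block_len: int) -> int:
        # every record/block occupies a whole number of 32-byte cells
        return math.ceil(block_len / CELL)

    print(cells_for(80))     # 3 cells ... an 80-byte record takes 96 bytes of track
    print(cells_for(4096))   # 128 cells for a 4096-byte block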

as I mentioned before ... 3380 was high-end ... while 3370 was considered mid-range.

there was a big explosion in the mid-range market with 43xx machines ... which MVS didn't fit well into. somewhat as part of helping MVS sell into that exploding 43xx/midrange market ... there was 3375 ... which emulated CKD on 3370 (there was also a major effort to get the MVS microcode assist from the 3033, required by all the new releases of MVS, implemented on 4341). recent post mentioning 3375 gave MVS a CKD disk for mid-range:
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index

at the time, 3310/3370 were sometimes referred to as FBA-512 (rather than simply FBA) ... implying plans for moving to even larger block sizes. recent post referencing current FBA is looking at moving from 512 (with 512 byte data block error correcting) to 4096 (with 4096 byte data block error correcting)
https://www.garlic.com/~lynn/2010m.html#1 History of Hard-coded Offsets

the above has some articles about various FBA-512 compatibility/simulation efforts (for FBA-4096) ... very slightly analogous to CKD simulation on FBA.

other recent posts mentioning fba, ckd, etc
https://www.garlic.com/~lynn/2010k.html#10 Documenting the underlying FBA design of 3375, 3380 and 3390?
https://www.garlic.com/~lynn/2010k.html#17 Documenting the underlying FBA design of 3375, 3380 and 3390?
https://www.garlic.com/~lynn/2010l.html#76 History of Hard-coded Offsets

Of course, VM had native FBA support and didn't have problems with low-end and mid-range markets. Part of this was that both VM kernel paging ... and the CMS filesystem organization (dating back to the mid-60s) have been logical FBA ... even when having to deploy on physical CKD devices.

I've made past references that with the demise of FS there was a mad rush to get stuff back into the 370 hardware & software pipeline ... and it was going to take possibly 7-8 yrs to get XA (starting from the time FS was killed). The MVS/XA group managed to make the case that the VM370 product had to be killed, the development group shut down and all the people moved to POK in order for MVS/XA to make its ship schedule (endicott managed to save the vm370 product mission, but had to reconstitute a development group from scratch). misc. past posts mentioning future system effort
https://www.garlic.com/~lynn/submain.html#futuresys

Later in the 80s, POK decided to come out with the vm/xa "high-end" product. However, there was some interesting back&forth about whether VM/XA would include FBA support ... they were under pressure to state that CKD was much better than FBA and that was why FBA support wasn't needed (as part of supporting MVS's lack of FBA support).

misc. past posts mentioning the disk division would let me come over and play disk engineer in bldgs. 14 (engineering lab) & 15 (product test lab)
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3883 Manuals

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 3883 Manuals
Newsgroups: bit.listserv.ibm-main
Date: Wed, 11 Aug 2010 20:40:53 -0400
rfochtman@YNC.NET (Rick Fochtman) writes:
At Clearing, we ran MVS very nicely on three 4341 Model Group 2 boxen for three years and it ran very nicely. Nowdays, my pocket calculator probably has more raw compute power but the fact remains that we were very happy with the equipment, until our workload grew beyond their capacity to process it. IIRC, the DASD farm was a mix of 3330-11's and 3350's. Talk about ancient.....

4341 was an approx. 1+mip processor ... much better price/performance than the (slower) 3031 ... and a cluster of 4341s was cheaper, had better price/performance and a higher aggregate mip rate than a 3033 (there were some internal politics trying to protect high-end sales from being eaten by mid-range).

bldg. 14&15 were running stand-alone processors for testcell testings. at one point they tried to run with MVS ... but found MVS had 15min MTBF (fail, hard loop, something requiring reboot). I offered to rewrite IOS to make it bullet proof and never fail ... allowing them to do ondemand, anytime, multiple concurrent testing (from around the clock, 7x24 scheduled stand alone test time) ... significantly increasing development productivity. this also got me dragged into various sorts of solving other kinds of problems ... as well as being dragged into conference calls with channel engineers back east.

so bldg. 14&15 typically got the next processor off the line ... after the processor engineers (4341s, 3033s, etc). one of the benefits of having provided the environment for engineering ... I could have much of the remaining processor time (not required by development testing ... which was very i/o intensive and rarely accounted for more than couple percent ... even with multiple concurrent testcells).

i had worked with some of the endicott performance people on ecps microcode assist ... originally for 138/148 ... old thread with part of process that decided what was to be included in ecps.
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

in any case, in the early 4341 time-frame, I had better 4341 access than most of the people in the endicott 4341 organization ... and would sometimes run performance benchmarks for various organizations. old email with some results for national lab benchmarks (a lab that was talking about ordering 70 machines)
https://www.garlic.com/~lynn/2006y.html#email790212
https://www.garlic.com/~lynn/2006y.html#email790212b
https://www.garlic.com/~lynn/2006y.html#email790220
in this post
https://www.garlic.com/~lynn/2006y.html#21 moving on

other old 43xx related email from the period
https://www.garlic.com/~lynn/lhwemail.html#4341

for something totally unrelated ... there was a recent thread about (mid-90s) having rewritten a major portion of airline reservation so that it ran possibly 100 times faster, sized so that it could handle every reservation for every airline in the world. as noted in the thread ... the aggregate MIP rate sizing is about equivalent to what is available in a cellphone today
https://www.garlic.com/~lynn/2010b.html#80 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#19 Processes' memory

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3883 Manuals

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: IBM 3883 Manuals
Newsgroups: bit.listserv.ibm-main
Date: 11 Aug 2010 18:15:35 -0700
rfochtman@YNC.NET (Rick Fochtman) writes:
At Clearing, we ran MVS very nicely on three 4341 Model Group 2 boxen for three years and it ran very nicely. Nowdays, my pocket calculator probably has more raw compute power but the fact remains that we were very happy with the equipment, until our workload grew beyond their capacity to process it. IIRC, the DASD farm was a mix of 3330-11's and 3350's. Talk about ancient.....

re:
https://www.garlic.com/~lynn/2010m.html#41 IBM 3883 Manuals
https://www.garlic.com/~lynn/2010m.html#42 IBM 3883 Manuals

... group 2 was a faster machine introduced later ... however, if you were running with an (existing?) DASD farm with a mix of 3330-11s and 3350s ... it was possibly an upgrade of an existing 370 machine (possibly a single 158 to three 4341s ... or maybe from a single 168?). it might have even been a pre-existing MVS (that didn't require the new 3033 mvs microcode assist) ... and likely within a traditional looking datacenter.

... but this wasn't a couple machines ... part of the explosion in mid-range was customers buying hundreds of 4341s at a time (which required new processors as well as new mid-range dasd) ... and placing them all over the business ... converting departmental conference rooms and supply rooms for 4341s with "mid-range" dasd .... not exactly someplace to load up a lot of 3380 disk cabinets (designed for the datacenter).

another trivial example was the internal service processor for 3090 ... it was a pair of (vm) 4361s with FBA (3090 service panels were actually cms ios3270 screens)

at internal installations, that vm/4341 mid-range explosion contributed to the scarcity of conference rooms ... with places like the santa teresa lab putting vm/4341s on every floor in every tower.

the 3880 controller group in san jose ran a huge microcode design application on a collection of 168s in the bldg. 26 datacenter. The datacenter was crammed to the walls and there wasn't any place to add more machines. They started looking at also putting lots of these vm/4341s out into all the departmental nooks & crannies ... as a means of delivering a lot more processing power to their development organization (again all new installations requiring new mid-range dasd).

one of the things started in the 90s was a very aggressive physical packaging effort to configure ever-increasing numbers of processors in the smallest amount of floor space ... somewhat done to get the explosion of departmental and distributed processors back into the datacenter.

Much of the early work had gone to national labs (like LLNL) and high energy physics labs (as GRID computing). It has also recently morphed into "cloud computing" ... somewhat marrying massive cluster scale-up with the 60&70s mainframe online commercial timesharing ... much of it virtual machine based (starting with cp67 and then morphing into vm370) ... one of the largest such (mainframe virtual machine) operations was the internal (virtual machine based) online HONE system providing world-wide sales & marketing support ... misc. old HONE references
https://www.garlic.com/~lynn/subtopic.html#hone

reference to jan92 meeting regarding early work on increasing processor density (number of processor per sq. ft).
https://www.garlic.com/~lynn/95.html#13

and some old email from that period on the subject
https://www.garlic.com/~lynn/lhwemail.html#medusa

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3883 Manuals

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: IBM 3883 Manuals
Newsgroups: bit.listserv.ibm-main
Date: 12 Aug 2010 06:43:49 -0700
re:
https://www.garlic.com/~lynn/2010m.html#41 IBM 3883 Manuals
https://www.garlic.com/~lynn/2010m.html#42 IBM 3883 Manuals
https://www.garlic.com/~lynn/2010m.html#43 IBM 3883 Manuals

for other 3380 related info ... this old email talks about track spacing (being 20 track widths on original 3380 and being reduced to 10 track widths on double density 3380s):
https://www.garlic.com/~lynn/2006s.html#email871122

in this past post
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?

the above post also contains this email
https://www.garlic.com/~lynn/2006s.html#email871230

where the father of 801/risc wanted me to help pitch a disk head proposal to some executives.

--
virtualization experience starting Jan1968, online at home since Mar1970

Basic question about CPU instructions

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Basic question about CPU instructions
Newsgroups: bit.listserv.ibm-main
Date: 12 Aug 2010 10:46:03 -0700
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
Certainly for the ES/9000 and 43xx processors; I'm not sure about, e.g., the 370/145.

late 70s there was start of effort to move the large variety of internal microprocessors to 801/risc (iliad chips) ... this included the follow-ons to 4331/4341 (i.e. 4361/4381), the as/400 follow-on to the s/38 ... and a lot of other internal microprocessors.

various issues cropped up with iliad chips ... and the effort was abandoned ... 4361/4381 doing their own custom cisc chip, crash project to do cisc chip for as/400 (decade later, as/400 did move to variant of 801/risc power/pc chip), etc. in the wake of abandoning that effort, some number of 801/risc engineers leave and show up on risc efforts at other vendors.

i contributed some to the whitepaper that killed the effort for 4381. low/mid range were vertical microcode processors simulating 370 ... somewhat akin to current day Hercules effort on intel processors. The idea was to move to common 801/risc for microprocessors ... minimizing the duplication of effort around the corporation developing new chips and associated (microcode) programming environment.

The whitepaper claims were that cisc technology had gotten to the stage where much of 370 instructions could be implemented directly in circuits (rather than emulated in microcode). Even with the higher mip rate of 801/risc, there was still approx. 10:1 instruction emulation overhead (needed 20mip microprocessor to get 2mip 370) ... while a cisc chip might only be 3-5 mips, quite a bit of that could be 370 instructions running nearly native in the chip.

small piece from that whitepaper:
- The 4341MG1 is about twice the performance of a 3148. Yet the 4341MG1's cycle time is only about 1.4 times faster than the 3148's. The rest of the performance improvement comes from applying more circuits to the design.

- The 4341MG2 is about 1.6 times faster than the 4341MG1. Yet the 4341MG2's cycle time is 1.4 times faster than the 4341MG1's. Once again, performance can be attained through more circuitry, not just faster circuitry.

- The 3031 is about 1.2 times faster than the 3158-3. Yet the 3031's cycle time is the same as the 3158-3's.


... snip ...

previous reference to benchmark with 4341MG1 slightly faster than 3031
https://www.garlic.com/~lynn/2010m.html#42 IBM 3883 Manuals

the 3031 reference is slight obfuscation. the 158-3 was a single (horizontal microcode) processor engine shared between the 370 microcode and the integrated channel microcode. the 3031 was a 158-3 with two processor engines ... one dedicated to running 370 microcode (w/o the integrated channel microcode) and one dedicated to the "303x channel director" running the integrated channel microcode (w/o the 370 microcode).

recent reference to 158 engine with integrated channel microcode was used for 303x channel director for all 303x processors (i.e. 3031 was 158-3 repackaged to use channel director, 3032 was 168-3 repackaged to use channel director, and 3033 started out as 168-3 wiring diagram using 20% faster chip with channel director)
https://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets

when 4361/4381 did come out ... there was some expectation that it would continue the explosion in mid-range sales that started with 4331/4341 (at the end of the 70s) ... however by then, the mid-range market was starting to move to workstations and large PCs (servers). a couple recent posts discussing the explosion in the mid-range market ... and then mid-range moving to workstations and large PCs:
https://www.garlic.com/~lynn/2010m.html#25 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010m.html#32 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010m.html#36 A Bright Future for Big Iron?
https://www.garlic.com/~lynn/2010m.html#41 IBM 3883 Manuals
https://www.garlic.com/~lynn/2010m.html#43 IBM 3883 Manuals

--
virtualization experience starting Jan1968, online at home since Mar1970

optimizing compilers

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: optimizing compilers
Newsgroups: bit.listserv.ibm-main
Date: 13 Aug 2010 06:51:46 -0700
john_w_gilmore@MSN.COM (john gilmore) writes:
A good first reference is:

F. J. Allen and John Cocke, "A catalog of optimizing transformations", Courant Computer Science Symposium 5, Upper Saddle River, NJ: Prentice-Hall, 1977, pp. 1-30.


John's invention of 801/risc ... I've frequently claimed was to take hardware in the opposite direction of the (failed) future system effort.
https://www.garlic.com/~lynn/submain.html#futuresys

Part of the 801 story was that the simplification of 801 hardware could be compensated for by sophistication in the software: the pl.8 compiler and cp.r monitor. for the fun of it ... recent reference to old benchmark numbers of pl.8 with pascal frontend against pascal/vs (on 3033)
https://www.garlic.com/~lynn/2010m.html#35 RISC design, was What will Microsoft use its ARM license for?

to this old email
https://www.garlic.com/~lynn/2006t.html#email810808

other recent posts w/reference to 801/risc
https://www.garlic.com/~lynn/2010l.html#39 Age
https://www.garlic.com/~lynn/2010l.html#42 IBM zEnterprise Announced
https://www.garlic.com/~lynn/2010m.html#33 What will Microsoft use its ARM license for?
https://www.garlic.com/~lynn/2010m.html#45 Basic question about CPU instructions

recent reference to John wanting me to go with him on disk head pitch:
https://www.garlic.com/~lynn/2010m.html#44 IBM 3883 Manuals

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Found an old IBM Year 2000 manual

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: OT:  Found an old IBM Year 2000 manual.
Newsgroups: bit.listserv.ibm-main
Date: 13 Aug 2010 07:21:07 -0700
richbourg.claude@MAIL.DC.STATE.FL.US (Richbourg, Claude) writes:
I was cleaning out my office today and found an old IBM manual from February 1998:

The Year 2000 and 2-Digit Dates:

A Guide for Planning and Implementation GC28-1251-08


in the early 80s, one of the online conferences on the internal network was discussions about the upcoming problems with dates and the end of the century (CENTURY forum). i've periodically reposted somebody's contribution regarding other kinds of computer date problems that they had encountered (person was at nasa houston) ...
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2k (Was: Re: Chinese Solve Y2k)
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...
https://www.garlic.com/~lynn/2003p.html#21 Sun researchers: Computers do bad math ;)
https://www.garlic.com/~lynn/2006r.html#16 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006u.html#35 Friday fun - Discovery on the pad and the software's not done
https://www.garlic.com/~lynn/2009n.html#53 Long parms...again

misc. past posts mentioning the internal network (which was larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

recent thread in linkedin science center alumni blog about the invention of the internal network:
https://www.garlic.com/~lynn/2010l.html#74 CSC History
https://www.garlic.com/~lynn/2010l.html#84 CSC History
https://www.garlic.com/~lynn/2010m.html#22 CSC History
https://www.garlic.com/~lynn/2010m.html#26 CSC History
https://www.garlic.com/~lynn/2010m.html#28 CSC History

--
virtualization experience starting Jan1968, online at home since Mar1970

A New U.S. Treasury Rule Would Add Millions to Prepaid Ranks

From: lynn@garlic.com (Lynn Wheeler)
Date: 13 Aug, 2010
Subject: A New U.S. Treasury Rule Would Add Millions to Prepaid Ranks
Blog: Payment Systems Network
is the situation that when you click on the linkedin URL ... it doesn't bring up the actual article?

Linkedin tries to "frame" posted articles/URLs ... sometimes when clicking on the linkedin flavor of a posted URL ... you get a webpage that just has the linkedin header ... w/o the actual article appearing in the frame below. Sometimes this is a browser configuration issue ... and sometimes it has been a linkedin issue. as posted above, the original posted "real" URL is (w/o the linkedin "wrapper"):

A New U.S. Treasury Rule Would Add Millions to Prepaid Ranks
http://www.digitaltransactions.net/newsstory.cfm?newsid=2608

the linkedin flavor is:
http://www.linkedin.com/news?viewArticle=&articleID=170145293&gid=50424&type=member&item=27032764&articleURL=http%3A%2F%2Fwww%2Edigitaltransactions%2Enet%2Fnewsstory%2Ecfm%3Fnewsid%3D2608&urlhash=OZZg

if you do a search engine query on the article title ... many of the other URLs/stories reference the digitaltransactions URL.

an earlier digitaltransactions URL (also posted earlier to this same linkedin group)

Little-Noted, Prepaid Rules Would Cover Non-Banks As Well As Banks
http://www.digitaltransactions.net/newsstory.cfm?newsid=2604

linkedin version:

http://www.linkedin.com/news?viewArticle=&articleID=166604681&gid=50424&type=member&item=26595540&articleURL=http%3A%2F%2Fwww%2Edigitaltransactions%2Enet%2Fnewsstory%2Ecfm%3Fnewsid%3D2604&urlhash=yvXl

the above references not just US Treasury ... but FinCen (within the US Treasury). Another news URL with some additional information:

Proposed Treasury rules take hard line against prepaid card fraud; Move to help take on terrorist funding may impact average consumers, too
http://www.creditcards.com/credit-card-news/fincen-study-prepaid-gift-card-suspicious-activity-report-required-1282.php

which has a URL to this FinCen PDF file:
http://www.heritage.org/Research/EnergyandEnvironment/EM74.cfm

There have been a number of somewhat related news stories about money laundering being traced back to some of the too-big-to-fail financial institutions (the gov. was following the money trail used to buy some airplanes used in drug smuggling ... and it led back to some large US financial institutions). One point of those articles was that the gov. has been doing so much to try and keep the associated (too-big-to-fail) institutions afloat ... that rather than prosecuting, throwing the executives in jail, and shutting down the institutions ... the institutions have been asked to stop doing it.

--
virtualization experience starting Jan1968, online at home since Mar1970

Announcement from IBMers: 10000 and counting

From: lynn@garlic.com (Lynn Wheeler)
Date: 13 Aug, 2010
Subject: Announcement from IBMers: 10000 and counting...
Blog: IBMers
I had started corporate online telephone book and email address files in the late 70s ... I thought it was neat when my list of (internal) email addresses passed 10,000. In the early days ... it was ok to have both external and internal phone numbers on your business card; then some of us started adding our internal email address to our business cards. The corporation then came out with a guideline that business cards were only for customers and internal contact information wasn't allowed on business cards. However, by that time, we had a gateway to the arpanet/internet and a few of us could put our external email address on our business cards.

One of the frequent discussions from 1980 was about how the majority of the corporation weren't computer users ... especially the executives. It was believed that decisions might vastly improve if a large percentage of employees actually used computers ... the question was what kind of inducement we could come up with to attract corporate employees to using computers.

HONE had addressed some of that after the 23jun69 unbundling announcement to provide world-wide online access to sales & marketing (after US HONE datacenters were consolidated in northern cal. ... the number of US HONE ids passed 30,000 in the late 70s). Part of this was increasing use of HONE AIDS in sales&marketing support ... including not being able to even submit mainframe orders that hadn't been processed by various HONE applications.

misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Has there been a change in US banking regulations recently?

From: lynn@garlic.com (Lynn Wheeler)
Date: 13 Aug, 2010
Subject: Re: Has there been a change in US banking regulations recently?
MailingList: Cryptography
On 08/13/2010 02:12 PM, Jon Callas wrote:
Possibly it's related to PCI DSS and other work that BITS has been doing. Also, if one major player cleans up their act and sings about how cool they are, then that can cause the ice to break.

Another possibility is that a number of people in financials have been able to get security funding despite the banking disasters because the risk managers know that the last thing they need is a security brouhaha while they are partially owned by government and thus voters.

I bet on synergies between both.

If I were a CSO at a bank, I might encourage a colleague to make a presentation about how their security cleanups position them to get an advantage at getting out from under the thumb of the feds over their competitors. Then I would make sure the finance guys got a leaked copy.

Jon


the original requirement for SSL deployment was that SSL be on from the original URL entered by the user. The drop-back to using SSL for only a small subset ... was based on the computational load caused by SSL cryptography .... in the online merchant scenario, it cut thruput by 90-95%; the alternative, to handle the online merchant scenario for the total user interaction, would have required increasing the number of servers by a factor of 10-20.
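
the 10-20x figure is just the arithmetic of that thruput cut ... a trivial sketch (python; the baseline rate is a made-up number, only the ratios matter):

  # if SSL leaves only 5-10% of a server's plaintext-HTTP thruput,
  # matching the original load takes 1/0.10=10x to 1/0.05=20x the servers
  baseline_tps = 1000.0                    # hypothetical plaintext rate
  for kept in (0.05, 0.10):
      print(f"thruput cut to {kept:.0%}: need {1 / kept:.0f}x the servers")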

One possibility is that the institution has increased the server capacity ... and/or added specific hardware to handle the cryptographic load.

A lot of banking websites are not RYO (roll-your-own), internally developed ... but stuff they buy from a vendor and/or have the website wholly outsourced.

Also some number of large institutions have their websites outsourced to vendors with large replicated sites at multiple places in the world ... and user interaction gets redirected to the closest server farm. I've noticed this periodically when the server farm domain name and/or server farm SSL certificate bleeds thru ... because of some sort of configuration and/or operational problems (rather than seeing the SSL certificate of the institution that I thot I was talking to).

Another possibility is that the vendor product that they may be using for the website and/or the outsourcer that is being used ... has somehow been upgraded (software &/or hardware).

--
virtualization experience starting Jan1968, online at home since Mar1970

Has there been a change in US banking regulations recently?

From: lynn@garlic.com (Lynn Wheeler)
Date: 13 Aug, 2010
Subject: Re: Has there been a change in US banking regulations recently?
MailingList: Cryptography
On 08/13/2010 03:16 PM, Chris Palmer wrote:
When was this *ever* true? Seriously.

re:
https://www.garlic.com/~lynn/2010m.html#50

... original design/implementation. The very first commerce server implementation by the small client/server startup (that had also invented "SSL") ... was the mall paradigm, development underwritten by a large telco (they were looking at being major outsourcer of electronic commerce servers) ... then the individual store implementation was developed.

we had previously worked with two people responsible for commerce server (at small client/server startup) on ha/cmp ... they are mentioned in this old posting about jan92 meeting in ellison's conference room
https://www.garlic.com/~lynn/95.html#13

they then left to join the small client/server startup ... and we also leave what we had been doing. we then get brought in as consultants because they want to do payment transactions on their server ... wanting to use this technology called "SSL" that had been invented at the startup. We have to go thru the steps of mapping the technology to payment business processes ... including backend use involving the interaction between commerce servers and the payment gateway; the payment gateway sitting on the internet and interfacing to acquiring network backends ... misc. past posts mentioning payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

we also have to do walkthru/audits of several of these new businesses calling themselves Certification Authorities that were selling SSL domain name digital certificates ... some past posts
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

approx. in the same era, but not exactly the same time (when webservers were seeing the ssl cryptographic load & dropping back to only using it for payment) ... some of the larger websites were starting to first see a "plain" tcp/ip scale-up issue ... having to do with tcp being originally designed as a session protocol ... which was effectively being misused by HTTP. As a result most vendor implementations hadn't optimized session termination ... which was viewed as an infrequent event (up until HTTP). There was a six month period or so ... when the large websites saw their processors spending 90-95% of the cpu running the FINWAIT list (as part of session termination).
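
a toy model (python ... not actual stack code) of why an unindexed FINWAIT list melts down under HTTP-style connection churn ... scanning the whole list on every termination is quadratic in aggregate, while a FIFO that expires entries in arrival order stays linear:

  import time
  from collections import deque

  N = 5000                                 # toy termination volume

  finwait = []
  t0 = time.perf_counter()
  for conn in range(N):
      for entry in finwait:                # scan entire list per close
          pass
      finwait.append(conn)
  print("linear-scan model:", round(time.perf_counter() - t0, 3), "sec")

  fifo = deque()
  t0 = time.perf_counter()
  for conn in range(N):
      fifo.append(conn)                    # termination timers expire in order
      if len(fifo) > 1000:
          fifo.popleft()                   # O(1) expiry of oldest entry
  print("fifo model:", round(time.perf_counter() - t0, 3), "sec")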

The small client/server startup was also seeing (other) scale-up problems in their server platforms used for downloading products (especially browser product download activity) ... and in constant cycle of adding servers. This was before rotating front-ends ... so users were asked to manually specify URL of specific server.

Their problem somewhat cleared up when they installed a large sequent box ... both because of the raw power of the sequent server ... and also because sequent claimed to have addressed the session terminate efficiency sometime previously (related to commercial unix accounts with 20,000 concurrent telnet sessions).

For other topic drift ... I believe the first rotating, load-balancing front-ends were with custom-modified software for routers at google.

--
virtualization experience starting Jan1968, online at home since Mar1970

Basic question about CPU instructions

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Basic question about CPU instructions
Newsgroups: bit.listserv.ibm-main
Date: 13 Aug 2010 15:54:16 -0700
Scott.Rowe@JOANN.COM (Scott Rowe) writes:
OK, the 9121 had some CMOS in it, but also still had much Bipolar logic:
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=212AEDFD169F4B9A8AB5D641C4560917?doi=10.1.1.86.4485&rep=rep1&type=pdf


compares footprint of 9121 air-cooled (announced sep91) with footprint of 4381 air-cooled (announced sep83).

... also mentions mainframe finally getting ESCON at 10MB/s (about the time we were doing FCS for 1Gb/s dual-simplex, aka concurrent 1Gb/s in each direction ... the mainframe flavor of FCS, with a bunch of stuff layered on top, was eventually announced as FICON). misc. old email from fall of 91 & early 92 related to using FCS for cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

also this old post referencing jan92 cluster scale-up meeting in Ellison's conference room
https://www.garlic.com/~lynn/95.html#13

one of the issues in this proposed disk head design
https://www.garlic.com/~lynn/2006s.html#email871230

reference in recent post
https://www.garlic.com/~lynn/2010m.html#44 IBM 3883 Manuals

was the 100MB/s or so transfer (16 tracks in parallel) ... 3090 had to do some unnatural acts to connect 100MB/s HiPPI channel interface (lots of problems between MVS being unable to support non-CKD ... and mainframe difficulty supporting 100MB/s and higher transfer rates).

the article also mentions use of EVE ... either the engineering verification engine or the endicott verification engine ... depending on who you were talking to. EVE packaging violated standard product floor loading and weight guidelines ... for customer products (but they weren't sold to customers). San Jose got an EVE in the period of the earthquake retrofit of disk engineering bldg. 14 ... while engineering was temporarily housed in an offsite bldg.

The san jose EVE (in the offsite bldg) as well as the los gatos LSM was used in RIOS chipset design (part of the credit for bringing in the RIOS chipset a year early went to use of EVE and LSM).

One of my other hobbies was HSDT effort ... with high-speed terrestrial and satellite links
https://www.garlic.com/~lynn/subnetwork.html#hsdt

There was HSDT 7m satellite dish in austin (where RIOS chip design went on ... RIOS eventually announced as power & rs6000; austin had greater rainfade and angle thru the atmosphere to the bird in the sky) ... and HSDT 4.5m dish in the los gatos lab parking lot. That got chip designs between austin and the LSM in the los gatos lab. The los gatos lab had T3 microwave digital radio to the roof of bldg. 12 on the main plant site ... and then a link from bldg. 12 to the temporary offsite engineering lab (got rios chip design from austin to the san jose EVE).

--
virtualization experience starting Jan1968, online at home since Mar1970

Is the ATM still the banking industry's single greatest innovation?

From: lynn@garlic.com (Lynn Wheeler)
Date: 15 Aug, 2010
Subject: Is the ATM still the banking industry's single greatest innovation?
Blog: Payment Systems Network
re:
https://www.garlic.com/~lynn/2010m.html#13 Is the ATM still the banking industry's single greatest innovation?

in '95 there were a number of presentations by consumer dial-up online banking operations (which had been around for nearly a decade) about moving to the internet ... biggest reason was being able to offload significant costs of the proprietary dial-up infrastructures to ISPs (who could amortize it across a number of different application environments/purposes). at the same time, the commercial/business dial-up online banking operations were making presentations that they would never move to the internet because of the significant security and fraud problems (that continue on today).

somewhat analogous to the proprietary dial-up online banking ... there were a number of proprietary VANs (value added networks) that grew up during the 70s & 80s ... all of which were made obsolete by the pervasive growth of the internet (although a few are still trying to hang on).

part of the uptake of the internet in the mid-90s was the appearance of browser technology, which greatly simplified moving existing online applications to browser/internet based operation ... i.e. the internet had been around since the great cut-over from arpanet to internet on 1jan83 ... although with some number of restrictions on commercial use ... until the spread of CIX in the early 90s.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Unleashes 256-core Unix Server, Its Biggest Yet

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Aug, 2010
Subject: IBM Unleashes 256-core Unix Server, Its Biggest Yet
Blog: Greater IBM Connection
IBM Unleashes 256-core Unix Server, Its Biggest Yet
http://www.pcworld.com/businesscenter/article/203422/ibm_unleashes_256core_unix_server_its_biggest_yet.html

from above:
IBM has strengthened its hand in the Unix business with new systems based on its Power7 processors, including a server for enterprises that scales to 256 cores.

... snip ...

and ...

IBM unleashes 256-core Unix server, its biggest yet
http://www.computerworld.com/s/article/9180818/IBM_unleashes_256_core_Unix_server_its_biggest_yet
IBM unleashes 256-core Unix server, its biggest yet
http://www.infoworld.com/d/hardware/ibm-unleashes-256-core-unix-server-its-biggest-yet-135
IBM unleashes 256-core Unix server, its biggest yet
http://www.networkworld.com/news/2010/082410-worried-about-id-theft-join.html

in the 90s, ibm already had a 256-core (single core/chip) unix server when it bought sequent, which had numa-q ... a 256 processor unix server using SCI and intel processors. the equivalent at the time to multicore chips was multiple single-core chips on a board.

reference to somewhat related announcement last year:
https://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
https://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time

and this reference to the start of unix scale-up in 1992
https://www.garlic.com/~lynn/95.html#13

--
virtualization experience starting Jan1968, online at home since Mar1970

z millicode: where does it reside?

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: z millicode: where does it reside?
Newsgroups: bit.listserv.ibm-main
Date: 17 Aug 2010 11:33:12 -0700
one of the reasons that the SIE instruction was so slow on the 3081 ... was that the service processor had to page some of the microcode in from 3310 FBA disk. things got faster on 3090 with SIE microcode resident ... and a lot more virtualization hardware support ... eventually expanded to PR/SM (on 3090 ... to compete with Amdahl's hypervisor; basis for current LPAR ... effectively subset of virtual machine function).

misc. old email discussing 3081/3090 SIE issue/differences
https://www.garlic.com/~lynn/2006j.html#email810630

in this post
https://www.garlic.com/~lynn/2006j.html#27 virtual memory

... aka TROUT was code-name for 3090 ...

presumably part of the implementation trade-off for the 3081 was that vmtool (internal high-end virtual machine & used sie) was purely planned on being used for mvs/xa development (and never planned on shipping as product). in that environment there was less of a concern about production use of vmtool/sie and therefore less of a performance issue (at least involving lots of switching between virtual machines ... SIE execution happened relatively infrequently).

--
virtualization experience starting Jan1968, online at home since Mar1970

About that "Mighty Fortress"... What's it look like?

From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Aug, 2010
Subject: About that "Mighty Fortress"...  What's it look like?
MailingList: Cryptography
On 08/17/2010 06:16 PM, David G. Koontz wrote:
Privacy against whom? There were enough details revealed about the key escrow LEAF in Clipper to see that the operation derived from over the air transfer of keys in Type I applications. The purpose was to keep a back door private for use of the government. The escrow mechanism an involution of PKI.

There were of course concerns as evinced in the hearing under the 105th Congress on 'Privacy in the Digital Age: Encryption and Mandatory Access Hearings', before the Subcommittee on the Constitution, Federalism, and Property Rights, of the Committee on The Judiciary, United States Senate in March 1998. These concerns were on the rights of privacy for users.

Clipper failed primarily because there wasn't enough trust that the privacy wouldn't be confined to escrow agents authorized by the Judiciary. The Federal government lost credibility through orchestrated actions by those with conscience concerned over personal privacy and potential government abuse.

Privacy suffers from lack of legislation and is only taken serious when the threat is pervasive and the voters are up in arms.


re:
https://www.garlic.com/~lynn/2010l.html#57 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#58 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#62 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#63 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#64 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#67 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#70 A slight modification of my comments on PKI
https://www.garlic.com/~lynn/2010l.html#71 A slight modification of my comments on PKI
https://www.garlic.com/~lynn/2010l.html#73 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#78 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010l.html#81 A mighty fortress is our PKI
https://www.garlic.com/~lynn/2010l.html#82 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010m.html#23 A mighty fortress is our PKI, Part II

we were tangentially involved in the cal. privacy legislation when we were brought in to help wordsmith the cal. electronic signature legislation.
https://www.garlic.com/~lynn/subpubkey.html#signature

some of the organizations were heavily involved in privacy and had done in-depth consumer surveys ... and the number one privacy issue was "identity theft", particularly the kind called "account fraud" ... particularly as a result of data breaches. since there seemed to be little or nothing being done about data breaches i.e. the fraud doesn't occur against the parties responsible for the repositories ... but against the consumers whose information is in the repositories (aka it is much easier to get parties to do something about security when they are both responsible and at risk). In any case, it apparently was hoped that the data breach notifications might motivate the parties holding the data to improve the security around the repositories (this could be considered to be a case of "mis-aligned business process" brought up in the financial mess congressional hearings ... i.e. regulation is much harder when the parties aren't otherwise motivated to do the right thing).

Since the cal. data breach legislation, there have been a number of federal bills introduced on the subject. One group has tended to be similar to the cal. legislation ... but frequently there have been competing bills introduced at the same time that are basically data breach notification bills that pre-empt the cal. legislation and eliminate most notification requirements.

The organizations responsible for the cal. data breach notification legislation were also working on a personal information opt-in bill about the same time ... when GLBA (the legislation that also repealed Glass-Steagall from the 30s and one of the factors in the current mess) added federal pre-emption opt-out ... allowing institutions to share personal information unless there was an opt-out request (pre-empting cal. work on requiring that personal information could only be shared with the individual's opt-in).

A few years ago at the annual privacy conference in DC, there was a session with panel of FTC commissioners. Somebody in the back of the room got up and claimed to be involved in many of the call-centers for the financial industry. The person claimed that the opt-out call-lines at these centers had no provisions for recording any information about individuals calling in (requesting opt-out of personal information sharing). The person then asked if any of the FTC commissioners might consider looking into the problem (later the whole conference adjourned to the spy museum down the street which had been taken over to celebrate the retirement of one of the FTC commissioners).

Note there was a key escrow business organization (that was supposedly business sponsored). The scenario in the meetings was that there was a disaster/recovery, no-single-point-of-failure, requirement for escrowing keys used to encrypt critical business data.

On the business side ... this would only require escrow of keys used for encrypting business critical data. The govs. participation was that they could have court orders to gain access to the keys. However, there seemed to have been some gov. implicit assumption that the business key escrow facilities would actually be escrowing all encryption keys ... not just the keys used for encrypting business critical data (needed for disaster/recovery & no-single-point-of-failure scenarios)

More recently I was co-author of the financial x9.99 privacy standard. Part of this included looking at how HIPAA/medical information might be able to leak in financial statements. However one of the things I grappled with was getting across the concept that protecting individual personal information could require the institution's security dept. to protect the personal information from the activities of the institution (security scenarios more frequently have institution assets being protected from criminals ... as opposed to individual assets being protected from the institution)

the biggest issue in all the privacy surveys and data breach notification stuff was criminals being able to use the leaked information for performing fraudulent financial transactions against the consumer's account (sort of a class of replay-attack; the number 2 issue was institutions sharing personal information). One of the things that we had done earlier in the x9.59 financial standard was to slightly tweak the paradigm and eliminate the replay-attack vulnerability. X9.59 did nothing to "hide" the information ... it just eliminated crooks being able to use the information (leaked from eavesdropping, skimming, data breaches, etc) for fraudulent transactions (x9.59 eliminating the mis-aligned business process where institutions needed to hide consumer information ... where the information leaking represented no <direct> risk to the institution).
https://www.garlic.com/~lynn/x959.html#x959
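
a minimal sketch of that paradigm tweak (python "cryptography" package; the transaction fields, and the P-256 curve standing in for the token's key, are illustrative only ... not the actual x9.59 format):

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import ec

  # the account number is in the clear ... but a transaction is only
  # valid with the account holder's signature over the transaction
  txn = b"account=1234567890;amount=42.50;merchant=example;seq=77"

  token_key = ec.generate_private_key(ec.SECP256R1())
  sig = token_key.sign(txn, ec.ECDSA(hashes.SHA256()))

  # issuer verifies against the on-file public key; a crook replaying
  # leaked/skimmed account data can't produce the signature ... so the
  # leaked information is useless for fraudulent transactions
  token_key.public_key().verify(sig, txn, ec.ECDSA(hashes.SHA256()))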

--
virtualization experience starting Jan1968, online at home since Mar1970

Has there been a change in US banking regulations recently

From: lynn@garlic.com (Lynn Wheeler)
Date: 18 Aug, 2010
Subject: Has there been a change in US banking regulations recently
MailingList: Cryptography
On 08/17/2010 09:52 PM, James A. Donald wrote:
For sufficiently strong security, ECC beats factoring, but how strong is sufficiently strong? Do you have any data? At what point is ECC faster for the same security?

re:
https://www.garlic.com/~lynn/2010m.html#56 About that "Mighty Fortress"... What's it look like?

in the 90s, one of the scenarios for RSA & hardware tokens was that the tokens had extremely poor random numbers. the ec/dsa standard required a random number as part of each signature; RSA had alternative mechanisms not needing random number capability in the token. Possibly that was one attraction of RSA: since it could work w/o random numbers, it could work both with tokens and non-tokens.
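
why the token's random number quality mattered so much: in dsa/ec-dsa, reusing (or predicting) the per-signature nonce leaks the private key outright. a toy demonstration of the algebra using classic dsa over a deliberately tiny group (ec/dsa is structurally identical; all numbers here are made up; python 3.8+ for pow(x, -1, q)):

  # toy dsa group: q=101 divides p-1=606, so g has order q
  p, q = 607, 101
  g = pow(3, (p - 1) // q, p)

  d = 57                       # the token's secret key
  k = 33                       # per-signature nonce ... reused (the flaw)

  def sign(h, k):
      r = pow(g, k, p) % q
      s = pow(k, -1, q) * (h + d * r) % q
      return r, s

  h1, h2 = 88, 19              # two (toy) message hashes
  r, s1 = sign(h1, k)
  _, s2 = sign(h2, k)

  # an observer of both signatures recovers first k, then the secret d
  k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
  d_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
  assert (k_rec, d_rec) == (k, d)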

However, RSA was extremely compute intensive in tokens ... even with a contact token drawing enormous power ... it still took a long time. One avenue was adding an enormous number of circuits to the chip to do RSA computation in parallel. However, that significantly drove up the power requirement ... further limiting implementations to contact based operations (to obtain sufficient power to perform the operations).
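
the relative compute cost is easy to see even on a desktop cpu ... a sketch with the python "cryptography" package (a token's power & time constraints were orders of magnitude harsher, so treat the numbers as suggestive only):

  import timeit
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

  msg = b"payment-sized payload"
  rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  ec_key = ec.generate_private_key(ec.SECP256R1())

  n = 200
  t_rsa = timeit.timeit(
      lambda: rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256()), number=n)
  t_ec = timeit.timeit(
      lambda: ec_key.sign(msg, ec.ECDSA(hashes.SHA256())), number=n)
  print(f"rsa-2048 sign:   {t_rsa / n * 1e3:.2f} ms/op")
  print(f"ecdsa-p256 sign: {t_ec / n * 1e3:.2f} ms/op")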

Somewhat as a result of having worked on what is now frequently called "electronic commerce", in the mid-90s, we were asked to participate in the x9a10 financial standard working group ... which had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments (debit, credit, stored value, high-value, low-value, contact, contactless, internet, point-of-sale, attended, unattended, transit turnstile, aka ALL)

As part of looking at ALL, was looking at the whole hardware token infrastructure (that could also meet ALL requirement). Meetings with the transit industry had requirement that transaction had to be contactless (i.e. some form of iso14443), work within couple inches of reader, and perform the operation within 1/10th of second.

Much of the x9.59 digitally signed transaction was light-weight enuf to (potentially) work within the constraints of the transit turnstile requirement (as well as meet all the x9a10 requirements), given a digital signature technology that had a sufficiently high level of integrity (& security strength for high-value transactions) but required a token implementation that could also meet the transit-turnstile requirement.
https://www.garlic.com/~lynn/x959.html#x959

RSA token solutions had tried to shorten the number of seconds (but never got to subsecond) by adding circuits & power requirements (precluding transit turnstile) ... pretty much further locking it into contact mode of operation.

So we sent out all the requirements ... and got back some number of responses about how to meet them. Basically (in late 90s) there were tokens that could do ec/dsa within the power requirements of iso14443 and transit-turnstile elapsed time ... given that their random number capability could be beefed up to not put things at risk. This was using relatively off the shelf chip (several hundred thousand circuits) with only minor tweak here and there (not the massive additional circuits that some vendors were adding to token chips attempting to reduce RSA elapsed time ... but also drastically driving up power requirement). We had several wafers of these chips produced (at a security fab) and used several for EAL certification.

One of the issues in EAL certification was that since the crypto was part of the chip ... it was included in the certification (there are some chips out there with higher certification because they get it on the bare-bones silicon ... and then add things afterwards like applications & crypto that aren't part of the certification). To get higher than an EAL4+ certification required higher level of evaluation of the ec/dsa implementation ... but during the process, NIST had published and then withdrawn its ec/dsa evaluation reference.

Also, part of the semi-custom chip was adding some minor tweaks to the silicon which eliminated some number of process steps in the fab ... as well as post-fab token processing ... significantly reducing the overall costs for basic product delivery.
https://www.garlic.com/~lynn/x959.html#aads

There was also preliminary design for a fully custom chip doing all the operations ... first cut was approx. 40,000 circuits (as opposed to several hundred thousand circuits for relatively off-the-shelf, semi-custom chip) ... full ec/dsa being able to handle the x9.59 transaction within the transit turnstile power & elapsed time constraint (power draw per unit time approx. proportional to total number of circuits). The fully custom design further reduced the power draw w/o sacrificing ec/dsa elapsed time.

In the high-value case ... the token had to be relatively high-integrity. In the transit-turnstile requirements it had to be able to do the operation within the very low power constraints of contactless turnstile operation as well as being in very small subsecond elapsed time range (the very low power constraints eliminated the RSA-based approaches that significantly increased the number of circuits powered in parallel as solution to modestly reducing the elapsed time) ... aka there was both a very tight power efficiency requirement as well as a very tight elapsed time requirement for performing the digital signature operation. EC/DSA could be done within the power efficiency constraints, elapsed time constraints, security integrity constraints, but had a requirement for very high integrity random number generation.

--
virtualization experience starting Jan1968, online at home since Mar1970

memes in infosec IV - turn off HTTP, a small step towards "only one mode"

From: lynn@garlic.com (Lynn Wheeler)
Date: 21 Aug, 2010
Subject: memes in infosec IV - turn off HTTP, a small step towards "only one mode"
Blog: Financial Cryptography
memes in infosec IV - turn off HTTP, a small step towards "only one mode"
https://financialcryptography.com/mt/archives/001265.html

an original HTTPS deployment requirement was that the end-user understand the relationship between the webserver they thot they were talking to and the corresponding (HTTPS) URL that they supplied. Another requirement was that ALL Certification Authorities selling SSL/HTTPS domain name digital certificates operate at the same (very high) level of integrity.

almost immediately, the original requirement was negated by merchant servers that dropped back to using HTTP for most of the online experience (because HTTPS cut thruput by 90-95%) and restricting HTTPS use to the pay/checkout portion, accessed by a "pay/checkout" button (supplied by the unvalidated website). Clicking on HTTPS URL buttons/fields from unvalidated sources invalidates the original requirement, since it creates a disconnect between the webserver the user thinks they are talking to and the corresponding URL (that is personally known to them). Since then, the use of "click-on" URLs has proliferated widely, resulting in users having little awareness of the corresponding URL. The issue/short-coming is that browser HTTPS only validates the equivalence between the webserver being talked to and the supplied URL (it does nothing to establish any equivalence between the supplied URL and what the end-user believes that URL may represent).
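
the gap is visible in any TLS client API ... a python sketch (www.example.com is just a stand-in host): the handshake proves only that the far end holds a CA-issued certificate matching the name the caller supplied ... nothing checks that the name is the one the user intended:

  import socket, ssl

  host = "www.example.com"               # however the URL was obtained
  ctx = ssl.create_default_context()     # CA validation + hostname match
  with socket.create_connection((host, 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname=host) as tls:
          print(tls.getpeercert()["subject"])
  # the check is host <-> certificate only; if a click supplied the
  # attacker's URL, the handshake above still succeeds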

In this environment, nearly anybody can buy an SSL domain name digital certificate for a "front" website, induce the end-user to "click-on" a field that claims to be some other website (which supplies their HTTPS URL to the end-user browser), and perform a MITM-attack with a modified PROXY server that establishes a (separate) HTTPS connection to the claimed website. There are a pair of HTTPS sessions with the fraudulent website in the middle (MITM-attack) with the modified PROXY providing the interchange between the two sessions (transparent to most end-users).

With the proliferation of zombie machines and botnets, there could be a sequence of paired sessions, so the valid website isn't seeing a large number of different sessions originating from the same IP address.

Of course, with the progress of zombie machine compromises, the attack can also be performed with a compromise of the end-user's browser (eliminating any requirement for an intermediate PROXY server). The latest sophistication of such compromises targets very specific online banking services ... performing fraudulent transactions and modifying the results presented to the user (so that the fraudulent transactions aren't displayed).

The compromise of the end-user machine was a well recognized and researched threat in the 90s and contributed to the EU FINREAD standard in the later 90s. The EU FINREAD standard basically added a separate hardened box as the end-point for financial operations, with its own display and input. Each financial operation was (accurately) displayed on the box and required explicit human interaction (eliminating transactions w/o the user's knowledge or transactions different than what the end-user believed).

The EU FINREAD standard ran afoul of some deployments of other add-on "secure" boxes which were done with serial-port interfaces. The difficult consumer problems with add-on serial-port attachments had been well known since the days of dial-up online banking (before the migration to internet). The enormous consumer problems with the serial-port attachments led to quickly spreading opinion in the industry that all add-on secure boxes were impractical for consumers (when it actually was that add-on serial-port boxes were impractical for the general market ... which was also a major motivation behind the creation of USB).

There is little evidence that internet in-transit eavesdropping has been any kind of significant threat (HTTPS encrypting information as a countermeasure to eavesdropping on information being transmitted). The major exploits have been end-point attacks of one sort or another.

--
virtualization experience starting Jan1968, online at home since Mar1970

z196 sysplex question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z196 sysplex question
To: <ibm-main@bama.ua.edu>
Date: Sun, 22 Aug 2010 08:58:22 -0400
tony@HARMINC.NET (Tony Harminc) writes:
The largest salesman's bonus in IBM history?

there was story they told when I was at boeing (I was brought in to help setup BCS ... being among the first dozen bcs employees) ... about when the first 360s were announced ... boeing had studied the announcement and walked in to the salesman and placed a really big order ... the salesman hardly knew what it was they were talking about. in any case, the salesman's commission that year was bigger than the CEO's compensation. it is then that the company changed from commission plan to quota ... something like 20% base salary and 80% of the salary dependent on meeting quota (reaching 150% of quota then is ".2 + 1.5*.80 = 1.4*base-salary").

next year, boeing ordered so much additional equipment that the salesman had reached his quota by the end of january (he then supposedly left and started his own computer services company). the quota plan was then enhanced so it could be adjusted during the year (if the salesman was exceeding it by any significant margin).

it makes sure that the only windfalls are for the ceo. much later I sponsored col. boyd's (OODA-loop, patterns of conflict, organic design for command and control, etc) briefings at IBM. One of Boyd's biographies mentions he did a tour in command of spook base (about the same time I was at boeing), and spook base (largest computer complex in SE asia) was a $2.5B windfall for IBM (at $2.5B, presumably also bigger than the boeing renton datacenter). misc. posts mentioning boyd and/or URLs mentioning boyd
https://www.garlic.com/~lynn/subboyd.html

misc. recent posts mentioning BCS:
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2010c.html#89 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010c.html#90 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010d.html#29 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#76 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#0 LPARs: More or Less?
https://www.garlic.com/~lynn/2010f.html#75 Is Security a Curse for the Cloud Computing Industry?
https://www.garlic.com/~lynn/2010i.html#54 Favourite computer history books?
https://www.garlic.com/~lynn/2010i.html#66 Global CIO: Global Banks Form Consortium To Counter HP, IBM, & Oracle
https://www.garlic.com/~lynn/2010k.html#18 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010l.html#47 C-I-C-S vs KICKS
https://www.garlic.com/~lynn/2010l.html#50 C-I-C-S vs KICKS
https://www.garlic.com/~lynn/2010l.html#51 Mainframe Hacking -- Fact or Fiction
https://www.garlic.com/~lynn/2010l.html#61 Mainframe Slang terms

--
virtualization experience starting Jan1968, online at home since Mar1970

towards https everywhere and strict transport security

From: lynn@garlic.com (Lynn Wheeler)
Date: 22 Aug, 2010
Subject: towards https everywhere and strict transport security
MailingList: Cryptography
On 08/22/2010 06:56 AM, Jakob Schlyter wrote:
There are a lot of work going on in this area, including how to use secure DNS to associate the key that appears in a TLS server's certificate with the intended domain name [1]. Adding HSTS to this mix does make sense and is something that is discussed, e.g. on the keyassure mailing list [2].

There is a large vested interest in the Certification Authority industry selling SSL domain name certificates. A secure DNS scenario is having a public key registered at the time the domain name is registered ... and then a different kind of TLS ... where the public key can be returned piggy-backed on the "domain name to ip-address mapping" response.
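
the "public key piggy-backed on the dns response" idea is essentially what was later standardized as DANE/TLSA ... a sketch with the dnspython package (assumes the zone actually publishes TLSA records; example.com is a stand-in):

  import dns.resolver

  # TLSA records live at _port._protocol.domain
  answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
  for rr in answers:
      # usage / selector / matching-type / key-or-cert digest
      print(rr)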

This doesn't have the revenue infrastructure add-on that happened with the Certification Authorities ... it is just bundled as part of the existing DNS infrastructure. I've pontificated for years that it is a catch-22 for the Certification Authority industry ... since there are aspects of improving the integrity of the DNS infrastructure i.e. the Certification Authority industry is dependent on DNS ... aka the Certification Authority industry has to match the information from the SSL digital certificate applicant with the true owner of the domain name on file with the DNS infrastructure (among other things, requiring digitally signed communication that is authenticated with the onfile public key in the domain name infrastructure is a countermeasure to domain name hijacking ... which then cascades down the trust chain, to hijackers applying for valid SSL domain name certificates).
https://www.garlic.com/~lynn/subpubkey.html#catch22

At 50k foot level, SSL domain name certificates were countermeasures to various perceived shortcomings in DNS integrity ... nearly any kind of improvements in DNS integrity contributes to reducing the motivation for SSL domain name certificates. Significantly improving integrity of DNS would eliminate all motivation for SSL domain name certificates. This would then adversely affect the revenue flow for the Certification Authority industry.

I've also periodically claimed that OCSP appeared to be a (very rube-goldberg) response to my position that digital certificates (appended to every payment transaction) would actually set the state-of-the-art back 30-40 yrs (as opposed to their claims that appended digital certificates would bring payments into the modern era ... that was separate from the issue of the redundant and superfluous digital certificates representing a factor of 100 times payment transaction payload and processing bloat).
https://www.garlic.com/~lynn/subpubkey.html#bloat

Anything that appears to eliminate competition for paid-for SSL digital certificates and/or strengthen the position of Certification Authorities ... might be construed as having an industry profit motivation.
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic cars driving themselves

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic cars driving themselves
Newsgroups: alt.folklore.computers
Date: Sun, 22 Aug 2010 19:40:47 -0400
William Hamblen <william.hamblen@earthlink.net> writes:
It is a little complicated. Each state has an excise tax on fuel and a registration tax on vehicles. There also is a federal excise tax. Heavy vehicles have to be registered in the states they operate in. The excise taxes and registration fees are apportioned among the states. Road construction and maintenance costs are paid by the fuel and registration taxes. The political bargain behind the excise tax is that wear and tear on roads tends to be proportional to the amount of fuel burned and the excise tax is a relatively fair way to share the burden.

we've had some past threads that there is negligible wear&tear on roads by anything less than 18wheelers (as well as roads being designed based on expected heavy-truck axle-load traffic over the expected lifetime). straight fuel based tax is effectively an enormous subsidy to heavy trucking industry (since nearly all road construction and maintenance costs are based on heavy truck usage ... with other traffic having negligible effect ... but the road costs are spread across all traffic).

some conjecture that if costs were accurately apportioned ... fuel tax for road build/maint would be totally eliminated for all but 18-wheeler heavy trucks ... and only charged for heavy truck fuel usage (to recover equivalent total revenue would likely drive heavy truck fuel consumption tax to several tens of dollars per gal.)

old thread with reference to cal state road design based on heavy truck equivalent single axle loads (ESALs) ... The effects on pavement life of passenger cars, pickups, and two-axle trucks are considered to be negligible (url went 404 but lives on at wayback machine):
https://www.garlic.com/~lynn/2002j.html#41 Transportation
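
the rule of thumb behind ESALs is the AASHTO "fourth power law" ... pavement damage grows roughly with the 4th power of axle load, normalized to an 18,000-lb single axle (1.0 ESAL). plugging in rough, illustrative per-axle loads (python):

  def esal(axle_lb):
      # damage relative to the 18,000-lb standard single axle
      return (axle_lb / 18000.0) ** 4

  car, truck = 2000, 18000               # rough per-axle loads (lbs)
  print(f"car axle: {esal(car):.6f} ESAL; truck axle: {esal(truck):.1f} ESAL")
  print(f"one loaded truck axle ~= {esal(truck) / esal(car):,.0f} car axles")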

misc. other posts discussing subject:
https://www.garlic.com/~lynn/2006g.html#5 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#10 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#12 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#15 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#24 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#46 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#50 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#51 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#52 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#53 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#54 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#5 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#23 The Pankian Metaphor
https://www.garlic.com/~lynn/2007n.html#97 Loads Weighing Heavily on Roads
https://www.garlic.com/~lynn/2007q.html#21 Horrid thought about Politics, President Bush, and Democrats
https://www.garlic.com/~lynn/2008b.html#55 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008e.html#48 fraying infrastructure
https://www.garlic.com/~lynn/2008k.html#68 Historian predicts the end of 'science superpowers'
https://www.garlic.com/~lynn/2008l.html#25 dollar coins
https://www.garlic.com/~lynn/2008l.html#36 dollar coins
https://www.garlic.com/~lynn/2008l.html#37 dollar coins
https://www.garlic.com/~lynn/2008l.html#54 dollar coins
https://www.garlic.com/~lynn/2008n.html#41 VMware Chief Says the OS Is History

--
virtualization experience starting Jan1968, online at home since Mar1970

Dodd-Frank Act Makes CEO-Worker Pay Gap Subject to Disclosure

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Dodd-Frank Act Makes CEO-Worker Pay Gap Subject to Disclosure
Newsgroups: alt.folklore.computers
Date: Mon, 23 Aug 2010 09:58:11 -0400
Dodd-Frank Act Makes CEO-Worker Pay Gap Subject to Disclosure
http://www.itbusinessedge.com/cm/blogs/bentley/dodd-frank-act-makes-ceo-worker-pay-gap-subject-to-disclosure/?cs=42893

from above:
In the aftermath, shareholders at several companies have demanded and been given a say on executive pay in hopes of preventing such excess. HP, Apple, Microsoft, Cisco and Intel are just a handful of them.

... snip ...

there was a report that during the past decade (and financial mess period), the ratio of executive-to-worker compensation had exploded to 400:1 (after having been 20:1 for a long time and 10:1 for most of the rest of the world).

part of this was public companies filing fraudulent financial reports to boost executive pay (even after sarbanes-oxley). GAO possibly thot that SEC wasn't doing anything about fraudulent financial reporting ... GAO started publishing reports about financial reporting that it believed to be fraudulent and/or in error (in some cases, the filings were later corrected ... but executive bonuses weren't correspondingly adjusted).

There was apparently a long list of things that SEC wasn't doing anything about ... which was a repeated theme by the person testifying that they had tried for a decade to get SEC to do something about Madoff. There was also a claim that large numbers regularly practiced illegal, naked short selling and believed there was little or no chance of being caught, since nobody at SEC understood what was going on.

past posts about item that made reference to illegal, naked short selling:
https://www.garlic.com/~lynn/2008q.html#50 Obama, ACORN, subprimes (Re: Spiders)
https://www.garlic.com/~lynn/2008s.html#63 Garbage in, garbage out trampled by Moore's law
https://www.garlic.com/~lynn/2009c.html#67 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009d.html#63 Do bonuses foster unethical conduct?
https://www.garlic.com/~lynn/2009d.html#75 Whistleblowing and reporting fraud
https://www.garlic.com/~lynn/2010c.html#33 Happy DEC-10 Day

past posts mentioning gao reports on financial reports of public companies that (GAO believed) were either in error and/or fraudulent:
https://www.garlic.com/~lynn/2008f.html#96 Bush - place in history
https://www.garlic.com/~lynn/2008k.html#25 IBM's 2Q2008 Earnings
https://www.garlic.com/~lynn/2009b.html#25 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009b.html#36 A great article was posted in another BI group: "To H*** with Business Intelligence: 40 Percent of Execs Trust Gut"
https://www.garlic.com/~lynn/2009b.html#48 The blame game is on : A blow to the Audit/Accounting Industry or a lesson learned ???
https://www.garlic.com/~lynn/2009b.html#49 US disaster, debts and bad financial management
https://www.garlic.com/~lynn/2009b.html#52 What has the Global Financial Crisis taught the Nations, it's Governments and Decision Makers, and how should they apply that knowledge to manage risks differently in the future?
https://www.garlic.com/~lynn/2009b.html#53 Credit & Risk Management ... go Simple ?
https://www.garlic.com/~lynn/2009b.html#54 In your opinion, which facts caused the global crise situation?
https://www.garlic.com/~lynn/2009b.html#73 What can we learn from the meltdown?
https://www.garlic.com/~lynn/2009b.html#80 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#0 Audit II: Two more scary words: Sarbanes-Oxley
https://www.garlic.com/~lynn/2009c.html#20 Decision Making or Instinctive Steering?
https://www.garlic.com/~lynn/2009c.html#29 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009d.html#0 PNC Financial to pay CEO $3 million stock bonus
https://www.garlic.com/~lynn/2009d.html#3 Congress Set to Approve Pay Cap of $500,000
https://www.garlic.com/~lynn/2009d.html#37 NEW SEC (Enforcement) MANUAL, A welcome addition
https://www.garlic.com/~lynn/2009d.html#42 Bernard Madoff Is Jailed After Pleading Guilty -- are there more "Madoff's" out there?
https://www.garlic.com/~lynn/2009d.html#61 Quiz: Evaluate your level of Spreadsheet risk
https://www.garlic.com/~lynn/2009d.html#62 Is Wall Street World's Largest Ponzi Scheme where Madoff is Just a Poster Child?
https://www.garlic.com/~lynn/2009d.html#63 Do bonuses foster unethical conduct?
https://www.garlic.com/~lynn/2009d.html#73 Should Glass-Steagall be reinstated?
https://www.garlic.com/~lynn/2009e.html#36 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
https://www.garlic.com/~lynn/2009f.html#2 CEO pay sinks - Wall Street Journal/Hay Group survey results just released
https://www.garlic.com/~lynn/2009f.html#29 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009f.html#51 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009g.html#7 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#33 Treating the Web As an Archive
https://www.garlic.com/~lynn/2009h.html#17 REGULATOR ROLE IN THE LIGHT OF RECENT FINANCIAL SCANDALS
https://www.garlic.com/~lynn/2009i.html#60 In the USA "financial regulator seeks power to curb excess speculation."
https://www.garlic.com/~lynn/2009j.html#12 IBM identity manager goes big on role control
https://www.garlic.com/~lynn/2009j.html#30 An Amazing Document On Madoff Said To Have Been Sent To SEC In 2005
https://www.garlic.com/~lynn/2010h.html#15 The Revolving Door and S.E.C. Enforcement
https://www.garlic.com/~lynn/2010h.html#16 The Revolving Door and S.E.C. Enforcement
https://www.garlic.com/~lynn/2010h.html#67 The Python and the Mongoose: it helps if you know the rules of engagement
https://www.garlic.com/~lynn/2010i.html#84 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#46 Snow White and the Seven Dwarfs
https://www.garlic.com/~lynn/2010l.html#38 Who is Really to Blame for the Financial Crisis?

past posts mentioning the 400:1 compensation explosion:
https://www.garlic.com/~lynn/2008i.html#73 Should The CEO Have the Lowest Pay In Senior Management?
https://www.garlic.com/~lynn/2008j.html#24 To: Graymouse -- Ireland and the EU, What in the H... is all this about?
https://www.garlic.com/~lynn/2008j.html#76 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#71 Cormpany sponsored insurance
https://www.garlic.com/~lynn/2008m.html#25 Taxes
https://www.garlic.com/~lynn/2008m.html#33 Taxes
https://www.garlic.com/~lynn/2008m.html#53 Are family businesses unfair competition?
https://www.garlic.com/~lynn/2008m.html#93 What do you think are the top characteristics of a good/effective leader in an organization? Do you feel these characteristics are learned or innate to an individual?
https://www.garlic.com/~lynn/2008n.html#2 Blinkylights
https://www.garlic.com/~lynn/2008n.html#58 Traditional Approach Won't Take Businesses Far Places
https://www.garlic.com/~lynn/2008q.html#14 realtors (and GM, too!)
https://www.garlic.com/~lynn/2008q.html#17 realtors (and GM, too!)
https://www.garlic.com/~lynn/2008r.html#61 The vanishing CEO bonus
https://www.garlic.com/~lynn/2008s.html#5 Greed - If greed was the cause of the global meltdown then why does the biz community appoint those who so easily succumb to its temptations?
https://www.garlic.com/~lynn/2008s.html#41 Executive pay: time for a trim?
https://www.garlic.com/~lynn/2008s.html#44 Executive pay: time for a trim?
https://www.garlic.com/~lynn/2009.html#50 Greed Is
https://www.garlic.com/~lynn/2009.html#80 Are reckless risks a natural fallout of "excessive" executive compensation ?
https://www.garlic.com/~lynn/2009b.html#25 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009b.html#41 The subject is authoritarian tendencies in corporate management, and how they are related to political culture
https://www.garlic.com/~lynn/2009d.html#3 Congress Set to Approve Pay Cap of $500,000
https://www.garlic.com/~lynn/2009e.html#73 Most 'leaders' do not 'lead' and the majority of 'managers' do not 'manage'. Why is this?
https://www.garlic.com/~lynn/2009f.html#2 CEO pay sinks - Wall Street Journal/Hay Group survey results just released
https://www.garlic.com/~lynn/2009g.html#44 What TARP means for the future of executive pay
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs
https://www.garlic.com/~lynn/2009p.html#48 Opinions on the 'Unix Haters' Handbook
https://www.garlic.com/~lynn/2010d.html#8 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#39 Agile Workforce
https://www.garlic.com/~lynn/2010f.html#33 The 2010 Census

--
virtualization experience starting Jan1968, online at home since Mar1970

32nd AADS Patent, 24Aug2010

From: lynn@garlic.com (Lynn Wheeler)
Date: 24 Aug, 2010
Subject: 32nd AADS Patent, 24Aug2010
Blog: First Data
32nd AADS Patent, 24Aug2010
https://www.garlic.com/~lynn/aadssummary.htm

The original patent work had nearly 50 patents drafted, and the patent attorneys said that it would be over 100 before everything was finished. Then somebody looked at the cost of filing 100+ patents worldwide and directed that all the claims be packaged into 9 patents for filing. Later the patent office came back and said they were getting tired of enormous patents where the filing fee didn't even cover the cost of reading all the claims (and that the claims needed repackaging into at least 25 patents).

we were then made an offer to move on ... so subsequent patent activity has been occurring w/o our involvement.

--
virtualization experience starting Jan1968, online at home since Mar1970

How Safe Are Online Financial Transactions?

From: lynn@garlic.com (Lynn Wheeler)
Date: 24 Aug, 2010
Subject: How Safe Are Online Financial Transactions?
Blog: Financial Crime Risk, Fraud and Security
How Safe Are Online Financial Transactions?
http://www.consumeraffairs.com/news04/2010/08/online_security_fears.html

Breaches have a much higher fraud ROI for the crooks ... number of accounts compromised per amount of effort. We were tangentially involved in the cal. data breach legislation; we had been brought in to help wordsmith the cal. electronic signature legislation, and several of the participants were heavily into privacy issues ... having done detailed, in-depth consumer privacy surveys.
https://www.garlic.com/~lynn/subpubkey.html#signature

The number one issue was the account fraud form of identity theft, and a major portion of that was data breaches. It seemed that little or nothing was being done about such breaches ... the result being the cal. data breach notification legislation (hoping that the publicity might prompt corrective action and countermeasures). some issues associated with breaches:

1) aligned business processes: merchants and transaction processors have to protect consumer information. in most security scenarios, there is significantly larger motivation to secure assets when exploits result in fraud against the institution trying to protect the assets. when the resulting fraud from exploits is against others (consumers), there is much lower motivation to secure such assets (before breach notification, no direct loss to the merchants and transaction processors).

2) security proportional to risk: the value of the transaction information to the merchant is the profit from the transaction ... possibly a couple dollars (and possibly a few cents per account transaction to the transaction processor). the value of the same information to the attackers/crooks is the account balance or credit limit ... potentially greater than 100 times more valuable. As a result, the attackers may be able to outspend the defenders by a factor of 100 (a rough worked example follows after item 3).

3) dual-use vulnerability: the account transaction information is needed in dozens of business processes at millions of locations around the world (requiring it to be generally available). at the same time, crooks can use the information for fraudulent transactions ... implying that the information in transactions has to be kept confidential and never divulged (not even to merchants where the information is required for normal business processes). This results in diametrically opposing requirements for the same information.
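
as a rough back-of-envelope on item 2 (all figures below are assumed purely for illustration, not actual breach data):

  merchant_profit_per_txn = 3.00    # assumed: "possibly a couple dollars"
  processor_fee_per_txn   = 0.05    # assumed: "a few cents per account transaction"
  avg_account_value       = 300.00  # assumed: account balance or credit limit

  defender_value = merchant_profit_per_txn + processor_fee_per_txn
  attacker_value = avg_account_value

  print(f"attacker/defender value ratio ~ {attacker_value / defender_value:.0f}:1")
  # prints ~98:1 with these assumed numbers -- the "greater than 100 times"
  # asymmetry above: attackers can rationally outspend defenders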

In general, the current paradigm has several "mis-aligned" (security) business processes involving the information used in transactions. in the mid-90s, we were asked to participate in the x9a10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments. The resulting x9.59 financial transaction standard slightly tweaked the paradigm, eliminating all the above threats and vulnerabilities (including waitresses copying the information).
https://www.garlic.com/~lynn/x959.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

How Safe Are Online Financial Transactions?

From: lynn@garlic.com (Lynn Wheeler)
Date: 24 Aug, 2010
Subject: How Safe Are Online Financial Transactions?
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2010m.html#64 How Safe Are Online Financial Transactions?

... and

Worried About ID Theft? Join the Club; Two-thirds of Americans fret about online security but it doesn't stop them from catching viruses
http://www.networkworld.com/news/2010/093010-feds-hit-zeus-group-but.html

and ...

Hackers bait Zeus botnet trap with dead celeb tales; Massive 'die-off' of celebrities in plane and car crashes tries to trick users into infecting themselves with malware
http://www.computerworld.com/s/article/9181666/Hackers_bait_Zeus_botnet_trap_with_dead_celeb_tales

x9.59 financial transaction standard introduced unique information for each transaction (that couldn't be reused by crooks for performing fraudulent financial transactions), which could be packaged in the form of a hardware token (unique, physical "something you have" authentication).
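
a minimal sketch of the per-transaction unique data idea ... note this uses an HMAC and a counter as stand-ins with hypothetical field names; the actual x9.59 standard used digital signatures:

  import hmac, hashlib, os

  # the device key stands in for a secret kept inside a hardware token;
  # in a real token it would never leave the hardware
  device_key = os.urandom(32)

  def authenticate_txn(account, amount, counter):
      # the per-transaction counter is the "unique information": a captured
      # authentication value can't be replayed on another transaction
      msg = f"{account}|{amount:.2f}|{counter}".encode()
      return hmac.new(device_key, msg, hashlib.sha256).hexdigest()

  def verify_txn(account, amount, counter, tag, last_seen_counter):
      expected = authenticate_txn(account, amount, counter)
      # reject replays: the counter must advance past the last one seen
      return counter > last_seen_counter and hmac.compare_digest(expected, tag)

  tag = authenticate_txn("1234-5678", 49.95, counter=18)
  print(verify_txn("1234-5678", 49.95, 18, tag, last_seen_counter=17))  # True
  print(verify_txn("1234-5678", 49.95, 18, tag, last_seen_counter=18))  # False: replay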

About the same time as the x9.59 (and x9a10 financial standard) work in the mid-90s, there were detailed looks at PC compromises ... viruses &/or trojans infecting a machine and impersonating a real live human (including keystrokes and/or mouse actions). This was somewhat behind the mid-90s scenario of commercial/business dialup online banking operations claiming that they would NEVER move to the internet. The most recent genre of such compromises has detailed, sophisticated knowledge of online banking operations, can perform fraudulent financial transactions (impersonating the real live human), and can modify the display of online banking transactions to hide evidence that the fraudulent transactions have occurred.

The EU FINREAD standard was developed in the 90s as a countermeasure to compromised PCs (virus/trojan that could impersonate the owner's physical operations) ... basically a hardened external box that required real human physical interaction for each transaction (eliminating the ability for virus/trojan impersonation) and had an independent display (eliminating a virus/trojan displaying a transaction different than the one to be executed).
https://www.garlic.com/~lynn/subintegrity.html#finread

--
virtualization experience starting Jan1968, online at home since Mar1970

Win 3.11 on Broadband

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Win 3.11 on Broadband
Newsgroups: alt.folklore.computers
Date: Wed, 25 Aug 2010 06:53:33 -0400
"Joe Morris" <j.c.morris@verizon.net> writes:
I'll see your 300 baud problems and raise you 134.5.

In 1968 my PPOE decided to stick its academic toe into the world of remote terminals connected to a mainframe. Four 2740 (not 2741) terminals were delivered as were the Bell 103A2 modems, but for some reason the IBM 2701 TP controller was delayed (this was long before I was involved in planning so I don't know the reason for the delay.)


I didn't get a 2741 at home until mar70 ... before that the only access I had was a terminal in the office.

at the univ, we started out with 2741s ... they had been delivered as part of 360/67 planning, for use with tss/360. that never really materialized because of all sorts of problems with tss/360. there was a 2702 telecommunications controller. some people from the science center came out and installed cp67 in jan68 and I got to start playing with it on weekends ... it never really got to the place where it was in general production operation. when the univ. got some teletypes ... I had to add tty terminal support to cp67.

cp67 came with 1052 & 2741 terminal support and had some tricky code that attempted various terminal sequences to dynamically determine the terminal type. the 2702 implemented a specific line-scanner for each terminal type and had a "SAD" command that could dynamically switch which line-scanner was associated with each port (set the port to one line-scanner, do some sequences and see if there was no error, otherwise reset the line-scanner and try again). I did a hack to cp67 for tty/ascii support that extended the process to tty/ascii terminals (three different kinds of line-scanners & terminal types to try).
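
the probe loop amounted to something like the following minimal sketch (the scanner names and the sad/probe helpers are hypothetical stand-ins; the real code was 360 assembler inside cp67):

  # hypothetical stand-ins for the 2702 SAD command and a probe write
  LINE_SCANNERS = ["1052", "2741", "tty"]

  def identify_terminal(port, sad, probe_ok):
      # cycle line-scanners until a probe sequence completes without error
      for scanner in LINE_SCANNERS:
          sad(port, scanner)            # switch the port to this line-scanner
          if probe_ok(port, scanner):   # no error: terminal type identified
              return scanner
      return None                       # no scanner matched; unknown device

  # toy simulation: pretend the terminal dialed in on port 3 is a tty
  attached = {3: "tty"}
  print(identify_terminal(3,
                          sad=lambda p, s: None,
                          probe_ok=lambda p, s: attached.get(p) == s))  # -> tty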

there was a telco box installed that had a whole bunch of incoming dialup lines, and the box could be configured so that, on an incoming call to a busy line, it would hunt for a different available free line (fading memory, i think "hunt group"?). The result was that a single number could be published for dialup access ... with the same number used for the whole pool of incoming lines.

Initial objective was to have a single incoming phone number published for all terminal types (single pool/hunt group?), relying on the dynamic terminal type identification code. However, the 2702 implementation wasn't quite so robust ... while any line-scanner could be dynamically associated with a port ... they had taken a short-cut and hardwired a line-speed oscillator to each port (it would dynamically switch the line-scanner for each port but not the line-speed ... and there was a direct physical wire between the telco box for each phone number and the 2702 ports). As a result, it required different pools of numbers/lines (and different published dial-in phone numbers) for 1052/2741 and tty/ascii, corresponding to the different port line-speeds on the 2702.

somewhat as a result, the univ. started a project to build a clone telecommunication controller ... starting with an interdata/3, reverse engineering the 360 channel interface, and building a channel interface board for the interdata/3. The interdata/3 would emulate 2702 functions with the added enhancement that terminal line-speed determination was done in software. This evolved into an interdata/4 (for the mainframe channel interface) with a cluster of interdata/3s dedicated to port interfaces. Interdata took the implementation and sold it as a product (and four of us got written up as responsible for the clone controller business). Later Perkin/Elmer bought Interdata and the box was sold under the Perkin/Elmer name. In the late 90s, I was visiting a large merchant acquiring datacenter (i.e. it handled incoming credit card transactions from point-of-sale terminals for a significant percent of the merchants in the US) and they had one such perkin/elmer box handling incoming point-of-sale terminal calls.

misc. past posts mentioning clone controller
https://www.garlic.com/~lynn/submain.html#360pcm

I haven't yet found any pictures of the 2741 at home ... but do have some later pictures of 300baud cdi miniterm that replaced the 2741 (although the following does have pictures of 2741 APL typeball that I kept):
https://www.garlic.com/~lynn/lhwemail.html#oldpicts

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Wed, 25 Aug 2010 07:44:42 -0400
"Keith F. Lynch" <kfl@KeithLynch.net> writes:
A policy of bailing out firms too big to fail harms small businesses. Especially since the tax money used for those bailouts partly comes from small businesses.

there was some theory that allowing companies to become too big to fail enabled them to become more competitive & efficient. eventually that policy allowed much of the country's economy to be channeled thru such operations. when the bubble burst and the mess hit ... letting those companies fail would have had significant immediate pain & adjustment ... but possibly not as much (long-term) as letting them continue to operate.

i've mentioned in the past looking at a periodic financial industry publication that presented hundreds of pages of operating numbers arranged in two columns ... one column for the avg numbers of the largest national banks compared to a column for the avg numbers of the largest regional banks. the largest regional banks were actually slightly more efficient in a number of things compared to the largest national banks ... an indication that the justification for allowing too big to fail institutions was not valid. Other indications are that they have used their too big to fail status to perpetuate inefficient and possibly corrupt operation.

this is somewhat related to the recent post about the ratio of executive to employee compensation exploding to 400:1 (after having been 20:1 for a long time and 10:1 in most of the rest of the world).
https://www.garlic.com/~lynn/2010m.html#62 Dodd-Frank Act Makes CEO-Worker Pay Gap Subject to Disclosure

past posts mentioning the regional/national financial institution comparison:
https://www.garlic.com/~lynn/2007e.html#65 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2008p.html#25 How do group members think the US payments business will evolve over the next 3 years?
https://www.garlic.com/~lynn/2009g.html#59 We Can't Subsidize the Banks Forever
https://www.garlic.com/~lynn/2010f.html#51 The 2010 Census
https://www.garlic.com/~lynn/2010h.html#52 Our Pecora Moment
https://www.garlic.com/~lynn/2010i.html#16 Fake debate: The Senate will not vote on big banks
https://www.garlic.com/~lynn/2010i.html#21 Fake debate: The Senate will not vote on big banks
https://www.garlic.com/~lynn/2010k.html#43 Snow White and the Seven Dwarfs

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Wed, 25 Aug 2010 08:18:09 -0400
re:
https://www.garlic.com/~lynn/2010m.html#67 Idiotic programming style edicts

recent reference to the feds following the money trail, used to buy drug smuggling planes, back to some too big to fail banks. apparently because they were too big to fail (feds leaning over backwards to do everything to keep them operating), rather than prosecuting, throwing the executives in jail and shutting down the institutions ... the feds asked the institutions to stop the illegal money laundering.
https://www.garlic.com/~lynn/2010m.html#24 Little-Noted, Prepaid Rules Would Cover Non-Banks As Wells As Banks

--
virtualization experience starting Jan1968, online at home since Mar1970

z/VM LISTSERV Query

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: z/VM LISTSERV Query
Newsgroups: bit.listserv.ibm-main
Date: 25 Aug 2010 12:47:30 -0700
gahenke@GMAIL.COM (George Henke) writes:
Does anyone know a good LISTSERV for z/VM?

do you want one that runs on z/VM or one about z/VM?

the original was implemented on VM in the mid-80s as part of EARN (the european flavor of bitnet) ... aka the "bit.listserv" part of usenet is the original bitnet/earn mailing lists gatewayed to the internet. misc. past posts mentioning bitnet/earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

listserv was sort of a subset of the internal TOOLSRUN that ran on VM internally from the early 80s ... somewhat an outcome of corporate executives becoming aware of online computer conferencing on the internal network ... something that I got blamed for in the late 70s and early 80s. ... misc. past posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

old email from somebody that I had worked with ... announcing they had taken a position in Paris responsible for establishing EARN.
https://www.garlic.com/~lynn/2001h.html#email840320

listserv history page (at vendor of listserv software):
http://www.lsoft.com/products/listserv-history.asp

their product page:
http://www.lsoft.com/products/products.asp

for a mailing list ABOUT z/VM ... LISTSERV@LISTSERV.UARK.EDU

the actual mailing list: IBMVM@LISTSERV.UARK.EDU

--
virtualization experience starting Jan1968, online at home since Mar1970

towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

From: lynn@garlic.com (Lynn Wheeler)
Date: 25 Aug, 2010
Subject: Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
MailingList: Cryptography
On 08/25/2010 09:04 AM, Richard Salz wrote:
Also, note that HSTS is presently specific to HTTP. One could imagine expressing a more generic "STS" policy for an entire site

A really knowledgeable net-head told me the other day that the problem with SSL/TLS is that it has too many round-trips. In fact, the RTT costs are now more prohibitive than the crypto costs. I was quite surprised to hear this; he was stunned to find it out.

Look at the "tlsnextprotonec" IETF draft, the Google involvement in SPDY, and perhaps this message as a jumping-off point for both:
http://web.archiveorange.com/archive/v/c2Jaqz6aELyC8Ec4SrLY

I was happy to see that the interest is in piggy-backing, not in changing SSL/TLS.


the work on HSP (high-speed protocol) in the late 80s was to do reliable transmission in a minimum 3-packet exchange, compared to a 5-packet minimum for VMTP (rfc1045) and a 7-packet minimum for tcp
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
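
back-of-envelope on why the packet-exchange count dominates (the RTT figure is assumed purely for illustration):

  # minimum packets per reliable exchange, per the above
  exchanges = {"HSP": 3, "VMTP (rfc1045)": 5, "TCP": 7}

  rtt_ms = 50.0  # assumed wide-area round-trip time

  for proto, pkts in exchanges.items():
      # roughly one one-way latency (RTT/2) per packet in a strictly
      # alternating exchange; SSL handshake round trips then sit on top
      # of tcp's, which is why the RTT cost can dominate the crypto cost
      print(f"{proto:15s} {pkts} packets ~ {pkts * rtt_ms / 2:.0f} ms minimum")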

disclaimer: i was on the related technical advisory board for HSP ... while at IBM ... over strong objections from the communication division; they also strongly protested that we had come up with 3-tier architecture and were out pitching it to customer executives ... at a time when they were attempting to get the client/server genie back into the terminal emulation bottle
https://www.garlic.com/~lynn/subnetwork.html#3tier
and
https://www.garlic.com/~lynn/subnetwork.html#emulation

then SSL, theoretically being stateless on top of tcp, added a whole bunch of additional chatter. there have frequently been changing trade-offs between transmission and processing ... but SSL started out being excessive in both transmission and processing (in addition to having the deployment requirement that the user understand the relationship between the website they believed they were talking to and the URL they had to supply to the browser ... a requirement that was almost immediately violated).

my pitch forever has been to leverage key distribution piggy-backed on the domain name to ip-address (dns) response ... and use that to do an encrypted/validated reliable transaction within the HSP 3-packet minimum exchange.

as previously mentioned, somewhere back behind everything else
https://www.garlic.com/~lynn/2010m.html#60 towards https everywhere and strict transport security

... there is strong financial motivation in the sale of the SSL domain name digital certificates.
https://www.garlic.com/~lynn/subnetwork.html#3tier

--
virtualization experience starting Jan1968, online at home since Mar1970

Win 3.11 on Broadband

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Win 3.11 on Broadband
Newsgroups: alt.folklore.computers
Date: Wed, 25 Aug 2010 19:53:02 -0400
"Joe Morris" <j.c.morris@verizon.net> writes:
"Hunt group" is the correct term, but I doubt that it was implemented in CPE at the data center. It would more likely have been implemented in the CO unless the customer had its own telephone system, in which case it would have been implemented in the corporate telephone switch.

Of course, with more recent technology it's possible even on small customer premises to do all sorts of things today that once were done only by Ma Bell in her kitchen with big switchframes and big $$$. (Asterisk, anyone?)


re:
https://www.garlic.com/~lynn/2010m.html#66 Win 3.11 on Broadband

rack-size box ... front panel with a white button for each line (that could light up). i'm sure it was a telco box ... but can't remember the identifier.

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Thu, 26 Aug 2010 08:36:44 -0400
Andrew Swallow <am.swallow@btopenworld.com> writes:
So this is a human problem. Human problems can be solved by removing the problem humans.

When a too big to fail bank hits problems the government can fire the directors. Any shadow directors who may be hiding some where should also be from frog marched out of the building. Recruit some replacement directors and instruct them to sort out the problems. Warn the recruits and that if they fail they will be frog marched out.


re:
https://www.garlic.com/~lynn/2010m.html#67 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010m.html#68 Idiotic programming style edicts

In the fall2008 congressional hearings into the financial mess, the term mis-aligned business process came up several times ... particularly in reference to the rating agencies. The issue was that with the sellers paying for the ratings on the toxic CDOs ... the people at the rating agencies were incented to give the triple-A ratings asked for by the sellers (in effect, the sellers were able to buy whatever ratings they wanted).

this played a significant role in the financial mess since it provided nearly unlimited funds to the unregulated loan originators ... and allowed them to unload all loans ... regardless of risk, borrower qualifications and/or loan quality ... at a premium price (something that might have been a tens of billions of dollars problem ballooned into a tens of trillions of dollars problem ... which brings in the recent reference that magnitude can matter).

it reminds me of an industrial espionage, trade-secret theft case involving the disk division circa 1980. the claim was for billions of dollars (basically the difference between a clone manufacturer being able to have a clone ready to ship on the day of announcement ... or the 6+ month delay that it would take to come up with a clone thru a reverse engineering process).

the judge made some ruling about security proportional to risk ... for any information valued at billions of dollars ... the company had to demonstrate processes in place to deter employees from walking away with such information (security processes in proportion to the value of the information). basically, people can't be assumed to be honest in the face of temptation of that magnitude (baby-proof the environment ... somewhat analogous to the requirement for fences around swimming pools ... since children can't be held responsible for wanting to drown themselves; given sufficient temptation, adults can't be held responsible for wanting to steal something).

in any case, the financial mess scenario subtheme was that it is much easier to regulate and provide effective control when the business processes are "aligned" and people are motivated to do the right thing. Regulation and control can become nearly impossible when the business processes are mis-aligned and people have strong motivations to do the wrong thing.

past references to the industrial espionage case with reference to the analogy about fences around swimming pools:
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2005f.html#60 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
https://www.garlic.com/~lynn/2006q.html#36 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#29 Intel abandons USEnet news
https://www.garlic.com/~lynn/2008.html#25 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008.html#26 Tap and faucet and spellcheckers
https://www.garlic.com/~lynn/2008s.html#5 Greed - If greed was the cause of the global meltdown then why does the biz community appoint those who so easily succumb to its temptations?
https://www.garlic.com/~lynn/2008s.html#24 Garbage in, garbage out trampled by Moore's law
https://www.garlic.com/~lynn/2009.html#4 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009.html#71 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2009e.html#82 Architectural Diversity
https://www.garlic.com/~lynn/2009q.html#71 Trade Secrets and Confidential Information

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic programming style edicts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic programming style edicts
Newsgroups: alt.folklore.computers
Date: Thu, 26 Aug 2010 09:07:58 -0400
Andrew Swallow <am.swallow@btopenworld.com> writes:
So this is a human problem. Human problems can be solved by removing the problem humans.

When a too big to fail bank hits problems the government can fire the directors. Any shadow directors who may be hiding some where should also be from frog marched out of the building. Recruit some replacement directors and instruct them to sort out the problems. Warn the recruits and that if they fail they will be frog marched out.


re:
https://www.garlic.com/~lynn/2010m.html#67 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010m.html#68 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010m.html#72 Idiotic programming style edicts

this also came up in the economists' discussion about congress being the most corrupt institution on earth ... and how a change to a "flat rate" tax could go a long way toward correcting the situation.

the scenario is that a majority of lobbying and associated huge sums of money are related to getting special tax breaks. all that goes away with flat-rate.

the other points they made ...

was that the current infrastructure has resulted in a 65,000+ page tax code ... and dealing with that level of complexity costs the country between 3 and 6 percent of GDP (i.e. productivity lost to dealing with the complexity of all the tax code provisions). the claim is that a "flat rate" tax would reduce that to 400-500 pages (freeing up the enormous resources currently devoted to the tax code for more productive activities)

also the current infrastructure results in businesses making non-optimal business decisions ... reducing the competitive position of the country's businesses.

... besides eliminating the motivation behind the majority of existing corruption and lobbying that goes on.

misc. past posts mentioning the flat-rate tax code justification and/or the most corrupt institution on earth:
https://www.garlic.com/~lynn/2008k.html#71 Cormpany sponsored insurance
https://www.garlic.com/~lynn/2008m.html#49 Taxes
https://www.garlic.com/~lynn/2009e.html#43 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#83 Architectural Diversity
https://www.garlic.com/~lynn/2009h.html#20 China's yuan 'set to usurp US dollar' as world's reserve currency
https://www.garlic.com/~lynn/2009p.html#31 Opinions on the 'Unix Haters' Handbook
https://www.garlic.com/~lynn/2010d.html#48 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#49 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010f.html#40 F.B.I. Faces New Setback in Computer Overhaul
https://www.garlic.com/~lynn/2010j.html#88 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010k.html#36 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010k.html#37 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010k.html#58 History--automated payroll processing by other than a computer?
https://www.garlic.com/~lynn/2010k.html#63 History--automated payroll processing by other than a computer?
https://www.garlic.com/~lynn/2010l.html#69 Who is Really to Blame for the Financial Crisis?

--
virtualization experience starting Jan1968, online at home since Mar1970

z millicode: where does it reside?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z millicode: where does it reside?
Newsgroups: bit.listserv.ibm-main
Date: Thu, 26 Aug 2010 12:06:50 -0400
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
The term microcoded is normally used when the simulated architecture is different from the underlying architecture, e.g., 108-bit wide control words simulating the 370 instruction set on a 3168.

re:
https://www.garlic.com/~lynn/2010m.html#55 z millicode: where does it reside?

the high-end had horizontal microcode ... i.e. wide words with various bits for doing various operations. coders needed to be aware of things like the machine cycles needed to fetch an operand ... since starting a fetch was separate from using the operand. this made for a higher level of concurrency under programmer control ... but there was a much larger number of things that the programmer had to keep track of (making programming & development much more difficult). efficiency of horizontal microcode was usually measured in avg. machine cycles per 370 instruction (while still greater than one ... a single horizontal microcode instruction was executed per machine cycle ... potentially doing several things simultaneously ... but also potentially just idle ... while waiting for various things).

370/168-1 had a 16kbyte processor cache and avg. 2.1 machine cycles per 370 instruction. 370/168-3 doubled the processor cache size to 32kbytes, and some microcode work reduced that to 1.6 machine cycles per 370 instruction.

low & mid-range machines were vertical microcode engines ... basically executed instructions not too different than 370 ... avg. approx. 10 microcode instruction executed per 370 instruction. for 370 138/148 "ECPS" microcode performance assist ... high-use "hot-spot" kernel instruction sequences were moved to microcode on nearly one-for-one basis (there is lots of talk about microcode instructions being simpler than 370 ... but for majority of kernel operations ... there tended to be less complex operations with equivalent microcode instructions available) ... resulting in a 10:1 performance improvement (aka normal 360 & 370 low & mid-range needed a one mip microcode engine to get 100kips 370).

the 3033 started out being the 168 wiring diagram mapped to 20% faster chips (the chips also had ten times as many circuits/chip ... but the extra circuits started out going unused). during 3033 development various things were tweaked, getting thruput up to 1.5 times the 168 (and also reducing the machine cycles per 370 instruction). one of the issues on high-end machines was that methodologies like ECPS yielded little benefit, since there weren't long sequences of 370 instructions that could be replaced with microcode instructions running ten times faster. In that sense ... the high-end horizontal microcode machines ran 370 much closer to native machine thruput (w/o the 10:1 slowdown seen with vertical microcode ... analogous to the various mainframe emulators running on intel platforms) ... so there wasn't any corresponding speedup from moving 370 function into microcode.

in the 3090 time-frame ... SIE got a big speed up compared to the 3081 ... since there wasn't enough room for the SIE microcode on the 3081 ... and it had to be paged in from 3310/FBA. in that time-frame Amdahl came out with the hypervisor (basically a virtual machine subset built into the machine, not requiring a vm/370 operating system), which was done in what they called macrocode ... a 370 subset ... that could be executed more efficiently than under the standard rules for 370 instruction execution. The 3090 eventually responded with PR/SM (the basis for current day LPAR) ... but it took much longer and involved much greater development effort since it all had to be done in horizontal microcode.

SIE and the other virtual machine assists were different from the 10:1 speedup of ECPS. Basically, a significant number of privileged instructions were enhanced to recognize 3 modes of execution: normal 370 supervisor mode, normal 370 problem mode (generate a privileged-operation interrupt), and 370 virtual machine supervisor mode. The instruction execution was different in 370 supervisor mode and 370 virtual machine supervisor mode. The speedup didn't come from replacing VM/370 kernel instructions with microcode instructions ... a big part came from eliminating the interrupt into the vm/370 kernel and the associated task switch (and change in cache contents) ... and the subsequent switch back to the virtual machine.
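
a conceptual sketch (plain python, obviously nothing like actual millicode) of the 3-mode privileged instruction handling just described:

  SUPERVISOR, PROBLEM, VM_SUPERVISOR = "supervisor", "problem", "vm-supervisor"

  def exec_privileged(op, mode):
      if mode == SUPERVISOR:            # normal 370 supervisor state
          return f"{op}: native supervisor semantics"
      if mode == VM_SUPERVISOR:         # guest supervisor state: handled in
          # microcode, no interrupt/task-switch into the vm/370 kernel
          return f"{op}: virtual-machine semantics, no exit to vm/370"
      return f"{op}: privileged-operation interrupt"   # problem state

  print(exec_privileged("LPSW", VM_SUPERVISOR))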

I had given a talk in the mid-70s at the (vm370) baybunch user group meeting (held monthly at SLAC in Palo Alto) on how we did ECPS for the 138/148; it was attended by a number of people from Amdahl (various vendors from the bay area were regulars at baybunch). Later they came back and said that the Amdahl hypervisor enhancements didn't show as much speedup as might be indicated by my (10:1) ECPS talk. However, we had to have the discussion that the machine implementations were totally different (horizontal microcode versus vertical microcode) and the kinds of things done in the hypervisor were different from what was being done in ECPS (i.e. the 10:1 speedup for ECPS was only possible because there was a 10:1 difference between the native microprocessor and 370 on the low & mid-range machines ... while the high-end machines executed 370 much closer to native machine speed).

misc. past posts mentioning (Amdahl) macrocode:
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005d.html#60 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
https://www.garlic.com/~lynn/2006c.html#7 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2006c.html#24 Harvard Vs Von Neumann architecture
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#15 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006h.html#30 The Pankian Metaphor
https://www.garlic.com/~lynn/2006j.html#32 Code density and performance?
https://www.garlic.com/~lynn/2006j.html#35 Code density and performance?
https://www.garlic.com/~lynn/2006m.html#39 Using different storage key's
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email
https://www.garlic.com/~lynn/2006t.html#14 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2006u.html#33 Assembler question
https://www.garlic.com/~lynn/2006u.html#34 Assembler question
https://www.garlic.com/~lynn/2006v.html#20 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2007b.html#1 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007d.html#3 Has anyone ever used self-modifying microcode? Would it even be useful?
https://www.garlic.com/~lynn/2007d.html#9 Has anyone ever used self-modifying microcode? Would it even be useful?
https://www.garlic.com/~lynn/2007j.html#84 VLIW pre-history
https://www.garlic.com/~lynn/2007k.html#74 Non-Standard Mainframe Language?
https://www.garlic.com/~lynn/2007n.html#96 some questions about System z PR/SM
https://www.garlic.com/~lynn/2007o.html#42 mainframe performance, was Is a RISC chip more expensive?
https://www.garlic.com/~lynn/2008c.html#32 New Opcodes
https://www.garlic.com/~lynn/2008c.html#33 New Opcodes
https://www.garlic.com/~lynn/2008c.html#42 New Opcodes
https://www.garlic.com/~lynn/2008c.html#80 Random thoughts
https://www.garlic.com/~lynn/2008j.html#26 Op codes removed from z/10
https://www.garlic.com/~lynn/2008m.html#22 Future architectures
https://www.garlic.com/~lynn/2008r.html#27 CPU time/instruction table
https://www.garlic.com/~lynn/2009q.html#24 Old datasearches

--
virtualization experience starting Jan1968, online at home since Mar1970

towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

From: lynn@garlic.com (Lynn Wheeler)
Date: 26 Aug, 2010
Subject: Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
MailingList: Cryptography
re:
https://www.garlic.com/~lynn/2010m.html#70 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

the sale of SSL domain name certs had a profit motivation pretty much unrelated to the overall costs to the infrastructure ... and so there was an extremely strong champion.

simply enhancing DNS and doing real-time trusted public key distribution thru a trusted domain name infrastructure ... was all cost, with no champion having a strong profit motivation.
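
a toy sketch of the idea (a dictionary standing in for a trusted DNS; the record layout is hypothetical, and a real deployment would need dnssec-style integrity on the response):

  TRUSTED_DNS = {
      "shop.example.com": ("192.0.2.10", "base64-public-key-here"),
  }

  def resolve_with_key(name):
      # routing and key material arrive in the same lookup, so an
      # encrypted/authenticated transaction can start without a separate
      # certificate exchange or extra handshake round trips
      return TRUSTED_DNS[name]

  ip, pubkey = resolve_with_key("shop.example.com")
  print(ip, pubkey)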

--
virtualization experience starting Jan1968, online at home since Mar1970

towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

From: lynn@garlic.com (Lynn Wheeler)
Date: 26 Aug, 2010
Subject: Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
MailingList: Cryptography
On 08/25/2010 10:40 PM, James A. Donald wrote:
This is inherent in the layering approach - inherent in our current crypto architecture.

re:
https://www.garlic.com/~lynn/2010m.html#70 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
https://www.garlic.com/~lynn/2010m.html#75 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

one of the things we ran into at the (ISO chartered) ANSI X3S3.3 (responsible for standards related to OSI level 3 & level 4) meetings with regard to standardization of HSP (high speed protocol) ... was that ISO had a policy that it wouldn't do standardization of things that violated the OSI model.

HSP violated the OSI model (and was turned down by X3S3.3) because it:

1) went directly from level 4/5 interface to the MAC interface (bypassing OSI level 3/4 interface)

2) supported internetworking ... which doesn't exist in the OSI model ... it would sit in a non-existent layer between level 3 & level 4

3) went directly to the MAC interface ... which doesn't exist in the OSI model ... something that sits approx. in the middle of layer 3 (above the link layer and including some amount of the network layer).

In the IETF meetings at the time of original SSL/TLS ... my view was that ipsec wasn't gaining traction because it required replacing parts of the tcp/ip kernel stack (upgrading all the kernels in the world was much more expensive then than it is now). That year, two things side-stepped the ipsec up-front kernel stack replacement problem:

• SSL ... which could be deployed as part of the application w/o requiring changes to existing infrastructure

• VPN ... introduced in the gateway session at the fall94 IETF meeting. This was implemented in gateway routers w/o requiring any changes to existing endpoints. My perception was that it upset the ipsec camp until they started referring to VPN as lightweight ipsec (but that opened things up for ipsec to be called heavyweight ipsec). There was a problem with two classes of router/gateway vendors ... those with processors that could handle the (VPN) crypto load and those with processors that couldn't. One of the vendors that couldn't handle the crypto load went into standards-stalling mode and also, a month after the IETF meeting, announced a VPN product that involved adding hardware link encryptors ... which would then require dedicated links between the two locations (as opposed to tunneling thru the internet).

....

I would contend that the various reasons why we are where we are ... include solutions having champions with profit motivation, as well as things like ease of introduction ... and being able to do incremental deployments with minimum disruption to existing facilities (like a browser application based solution w/o requiring any changes to established DNS operation).

On the other hand ... when we were brought in to consult with the small client/server startup that wanted to do payment transactions (and had also invented SSL) ... I could mandate multiple-A-record support (basically an alternative path mechanism) for the webserver to payment gateway TCP/SSL connections. However, it took another year to get their browser to support multiple A-records (even when supplying them with example code from the TAHOE 4.3 distribution) ... they started out telling me that the multiple-A-record technique was "too advanced".
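
a minimal modern sketch of the multiple-A-record fallback (the mid-90s implementation was of course C against the TAHOE 4.3 resolver, not python):

  import socket

  def connect_multiple_a(host, port, timeout=5.0):
      # try every address returned for the name; if one ISP's path is
      # down, fall through to the next (the "alternative path mechanism")
      last_err = None
      for family, socktype, proto, _, addr in socket.getaddrinfo(
              host, port, type=socket.SOCK_STREAM):
          s = socket.socket(family, socktype, proto)
          s.settimeout(timeout)
          try:
              s.connect(addr)
              return s                  # first reachable address wins
          except OSError as err:
              last_err = err
              s.close()
      raise last_err or OSError(f"no addresses for {host}")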

An early example was one of the first large adopters/deployments of the e-commerce server, advertised on national sunday football and expecting big e-commerce business during sunday afternoon halftime. Their e-commerce webserver had redundant links to two different ISPs ... however, one of the ISPs had a habit of taking equipment down during the day on sunday for maintenance (w/o multiple-A-record support, there was a large probability that a significant percentage of browsers wouldn't be able to connect to the server on some sunday halftime).

--
virtualization experience starting Jan1968, online at home since Mar1970

towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Aug, 2010
Subject: Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
MailingList: Cryptography
re:
https://www.garlic.com/~lynn/2010m.html#70 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
https://www.garlic.com/~lynn/2010m.html#75 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
https://www.garlic.com/~lynn/2010m.html#76 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

On 08/27/2010 12:38 AM, Richard Salz wrote:
(For what it's worth, I find your style of monocase and ellipses so incredibly difficult to read that I usually delete your postings unread.)

It is well studied. I had gotten blamed for online computer conferencing on the internal network in the late 70s and early 80s (rumor is that when the executive committee became aware ... 5 of 6 wanted to immediately fire me ... supposedly there was only one holdout).

somewhat as a result, there was a researcher paid to sit in the back of my office for nine months, taking notes on how I communicated face-to-face, by telephone, and by computer ... they got copies of all incoming and outgoing email, logs of all instant messages, etc. Besides being a corporate research report, it was also the basis for several papers, books and a stanford phd (joint between language and computer AI). One number was that I averaged electronic communication with 275 different people per week over the 9-month period. lots of past posts mentioning computer mediated communication
https://www.garlic.com/~lynn/subnetwork.html#cmc

in any case, we were brought in to help wordsmith the cal. state electronic signature legislation. the certification authority industry was heavily lobbying (effectively) for digital certificates to be mandated for every adult.

The certification authority industry, besides doing the SSL domain name digital certificates, was out pitching a $20B/annum business case to wall street money people (basically all adults with a $100/annum digital certificate). Initially they appeared to believe that the financial industry would underwrite the certificates. The financial industry couldn't see the justification for a $20B/annum transfer of wealth to the certification authority industry. There were then various attempts to convince consumers that they should pay for it directly out of their own pockets. in the payment area, they were also pitching to the merchants that, as part of deploying a digital certificate infrastructure, the burden of proof in digitally signed payment transactions would be switched to consumers (somewhat like the UK, where approximately that has happened as part of the payment hardware token deployment).

That netted out to consumers paying $100/annum (for digital certificates), out of their own pockets, for the privilege of having the burden of proof in disputes shifted to them. that didn't sell ... so there was heavy lobbying all around the world for governments to mandate digital certificates for every adult (paid for by the individual). The lawyers working on the cal. legislation explained why digital signatures didn't meet the criteria for "human signatures" (demonstration of a human having read, agreed, authorized, and/or approved) needed by electronic signature legislation. we got some patents in the area, the 32nd just granted on tuesday; they are all assigned, we have no interest, and we have been long gone for years.
https://www.garlic.com/~lynn/aadssummary.htm

There are a couple of issues with new technology uptake ... it is much more successful when 1) there is no incumbent technology already in the niche, 2) there are strong champions with profit motivation, and 3) there is at least some perceived benefit. In the 90s, I would pontificate on how SSL domain name certificates didn't actually provide any significant security ... but were "comfort" certificates (for consumers), aka the benefit was significantly a matter of publicity.

Better solutions that come along later don't necessarily win ... they have an incumbent to deal with and are especially at a disadvantage if there aren't major champions (typically with strong profit motivation).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Hall of Fame (MHOF)

From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Aug, 2010
Subject: Mainframe Hall of Fame (MHOF)
Blog: MainframeZone
There was an effort in the late 70s to replace the large number of internal microprocessors with 801/risc ... including having the low & mid-range 370s use 801/risc as the native microprocessor (emulating 370). The original AS/400 (follow-on to the s/38) was also to use 801/risc (as were a large number of other things). The chips ran into problems and the efforts were abandoned for various reasons. The 801/risc chips at the time could have fit the bill ... but they were much more expensive chips and required a more expensive hardware support infrastructure ... than could be done in the PC market place at the time. Note that ALL the low-end and mid-range 360s & 370s were analogous to Hercules ... in that 370 emulation ran on various kinds of underlying microprocessors. The microcode technology at the time averaged approx. 10 native instructions for every 370 instruction (needing a 5mip native processor in order to achieve .5mip 370 thruput).
https://www.garlic.com/~lynn/submain.html#360mcode

The company did have (an extremely) expensive 68K machine that was being sold by the instrument division.

For slight drift ... old email reference about Evans getting a request for 801/risc chips from MIT and offering them 8100 computers instead:
https://www.garlic.com/~lynn/2003e.html#email790711

other folklore was that Evans had asked my wife to evaluate the 8100 when they were considering whether to keep it going or not; she turned thumbs down and the product was killed.

various other old email related to 801/risc
https://www.garlic.com/~lynn/lhwemail.html#801

Part of the reason that some amount of the above still survives is that in the late 70s and early 80s, I got blamed for computer conferencing on the internal network (which was larger than the arpanet/internet from just about the beginning until late '85 or early '86). Somewhat as a result, there was a detailed study of how I communicate ... which included putting in place processes that logged/archived all my electronic communication (incoming/outgoing email, instant messages, etc).
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

Idiotic take on Bush tax cuts expiring

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Idiotic take on Bush tax cuts expiring
Newsgroups: alt.folklore.computers
Date: Sat, 28 Aug 2010 09:47:49 -0400
recent reference/thread
https://www.garlic.com/~lynn/2010f.html#46 not even sort of about The 2010 Census

past couple of years I've frequently made reference to the former comptroller general ... who retired early so he could be outspoken about congress and the budget (he had been appointed in the late 90s for a 15 year term); frequently commenting that nobody in congress for the last 50 years appeared to be capable of middle school arithmetic.

he was on a tv show pushing his book ... and made reference to the budget taking a turn for the worse after the congressional fiscal responsibility law expired in 2002 ... which appeared to have allowed congress to pass the bill that created the worst budget hit of all ... the claim was $40TRILLION in unfunded mandates for the partd/drug bill (i.e. worse than everything else combined that has been done before or since).

60mins had earlier done a segment on the behind the scenes things that went on getting that bill passed. a major item was slipping in a one-sentence change (just before the vote) that eliminated competitive bidding for partd drugs. 60mins compared drugs available under partd ... with the identical drug/brand available thru VA programs (obtained thru competitive bidding) at 1/3rd the cost (i.e. lobbying by the drug industry resulted in an enormous windfall to the drug industry).

60mins identified the 12-18 major congressmen and staffers responsible for shepherding the bill thru (all members of the party in congressional power in 2003). One of the things was distributing a GAO & congressional budget office analysis done w/o the one-sentence change ... while managing to sidetrack the distribution of the updated analysis (reflecting the effect of the one-sentence change) until after the vote. The followup was that all 12-18 had since resigned their positions and had high paying positions in the drug industry.

past posts mentioning comptroller general:
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
https://www.garlic.com/~lynn/2006o.html#61 Health Care
https://www.garlic.com/~lynn/2006p.html#17 Health Care
https://www.garlic.com/~lynn/2006r.html#0 Cray-1 Anniversary Event - September 21st
https://www.garlic.com/~lynn/2006t.html#26 Universal constants
https://www.garlic.com/~lynn/2007j.html#20 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#91 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#19 Another "migration" from the mainframe
https://www.garlic.com/~lynn/2007o.html#74 Horrid thought about Politics, President Bush, and Democrats
https://www.garlic.com/~lynn/2007p.html#22 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007q.html#7 what does xp do when system is copying
https://www.garlic.com/~lynn/2007s.html#1 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2007t.html#13 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#14 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#15 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#24 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2007t.html#25 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#33 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007t.html#35 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007v.html#26 2007 Year in Review on Mainframes - Interesting
https://www.garlic.com/~lynn/2008.html#57 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008d.html#40 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
https://www.garlic.com/~lynn/2008f.html#86 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom
https://www.garlic.com/~lynn/2008h.html#26 The Return of Ada
https://www.garlic.com/~lynn/2008i.html#98 dollar coins
https://www.garlic.com/~lynn/2008n.html#8 Taxcuts
https://www.garlic.com/~lynn/2008n.html#9 Taxcuts
https://www.garlic.com/~lynn/2008n.html#17 Michigan industry
https://www.garlic.com/~lynn/2009f.html#20 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009n.html#55 Hexadecimal Kid - articles from Computerworld wanted
https://www.garlic.com/~lynn/2009p.html#86 Opinions on the 'Unix Haters' Handbook
https://www.garlic.com/~lynn/2009p.html#87 IBM driving mainframe systems programmers into the ground
https://www.garlic.com/~lynn/2010.html#36 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#37 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#60 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#3 Oldest Instruction Set still in daily use?
https://www.garlic.com/~lynn/2010c.html#9 Oldest Instruction Set still in daily use?
https://www.garlic.com/~lynn/2010c.html#23 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010f.html#34 The 2010 Census
https://www.garlic.com/~lynn/2010f.html#46 not even sort of about The 2010 Census

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Emulator Software

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: 3270 Emulator Software
Newsgroups: bit.listserv.ibm-main
Date: 30 Aug 2010 08:44:27 -0700
R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
Some people still use real 3270s. I do, but only for consoles and local (non-SNA) terminals. However I do my regular work on emulator since day 0.

i kept a real 3277 for a long time because the human factors were so much better than the 3274/3278. we actually had an argument with kingston about the design point of the 3274/3278 ... and they eventually came back and said it was never targeted for online, interactive work ... it was purely targeted at data-entry use (i.e. keypunch).

part of the problem was that they moved quite a bit of electronics out of the terminal head and back into the 3274 (reducing manufacturing costs). with the electronics in the 3277, there were some number of "human factors" hacks that could be done to improve operation.

it was possible to do a little soldering inside the 3277 keyboard to adjust the "repeat delay" and the "repeat rate".

the 327x was half-duplex and had a nasty habit of locking the keyboard if a key was hit at the same time a write happened to go to the screen (really terrible interactive characteristics). A very small fifo box was built; unplug the keyboard from inside the 3277 head, plug the fifo box into the head, and plug the keyboard into the fifo box. the fifo box would queue pending keystrokes while the screen was being written ... avoiding the keyboard locking problem.
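
a minimal sketch of the fifo box logic (hypothetical code, obviously not the actual hardware): buffer keystrokes that arrive while the half-duplex head is being written, then drain them when the write completes:

    from collections import deque

    class KeystrokeFifo:
        # sketch of the 3277 fifo box: queue keystrokes that arrive while
        # the (half-duplex) head is being written, instead of letting the
        # keyboard lock; drain them when the write completes.
        def __init__(self):
            self.pending = deque()
            self.write_in_progress = False

        def screen_write_begin(self):
            self.write_in_progress = True

        def screen_write_end(self):
            self.write_in_progress = False
            while self.pending:                      # drain queued keystrokes
                self.send_to_head(self.pending.popleft())

        def keystroke(self, key):
            if self.write_in_progress:
                self.pending.append(key)             # would have locked keyboard
            else:
                self.send_to_head(key)

        def send_to_head(self, key):
            print("to head:", key)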

the 3272/3277 (ANR) was also much faster than the 3274/3278 (DCA) ... there was a joke that TSO was so non-interactive and so slow ... that TSO users had no idea that the 3274/3278 hardware combination was enormously slower than the 3272/3277 (because so much terminal head electronics for the 3278 had been moved back into the 3274 ... there was an enormous increase in the DCA controller/terminal head chatter ... which was responsible for much of the significant slow down).

old benchmarks comparing 3272/3277 & 3274/3278
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

later with PC simulated 3270s ... those doing ANR/3277 simulation had much faster download/upload transfers than the DCA/3278 simulations (again because there was so much more controller/terminal head chatter required by DCA over the coax).

prior to PCs and simulated 3270s ... vm provided simulated virtual 3270s over the internal network (late 70s, eventually released as a product) and there was an internally developed HLLAPI for simulated keystrokes, logic and screen scraping called PARASITE/STORY. old posts with PARASITE/STORY description and sample STORYs:
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#36 Newbie TOPS-10 7.03 question

above includes STORY for automatically logging onto RETAIN for retrieving bug & fix descriptions.

other recent posts mentioning PARASITE/STORY
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009l.html#43 SNA: conflicting opinions
https://www.garlic.com/~lynn/2009q.html#4 Arpanet

for other topic drift ... circa 1980, STL was bursting at the seams and they decided to move 300 people from the IMS group to an offsite building (approx. ten miles away) and let them remote back into the computers at the datacenter.

They tried SNA remote 3270 support, but the IMS group found the operation extremely dismal (compared to the local 3270 vm370 support that they were used to inside STL ... even tho those were 3274/3278). Eventually it was decided to deploy NSC HYPERchannel "channel extenders" with local ("channel attached") 3274/3278 terminals at the remote site. For the fun of it, I wrote the software driver for the HYPERchannel box ... it basically scanned the channel program ... and created a simplified version which was downloaded (over the HYPERchannel network) to the HYPERchannel A51x remote device adapter (aka the channel emulator box at the remote building).
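
the general shape of that driver logic might look something like this (a sketch with hypothetical names ... the real driver was of course mainframe kernel code, not python):

    class RemoteAdapter:
        # stand-in for the HYPERchannel A51x channel-emulator box
        def download(self, program):
            self.program = program                   # shipped over the network
        def execute(self):
            return "CE+DE"                           # ending status when done

    def extend_channel_program(ccws, adapter):
        # scan the channel program (chain of CCWs) and build a simplified
        # version that the remote box can replay against its local 3274
        simplified = [{"op": c["op"], "count": c["count"], "data": c.get("data")}
                      for c in ccws]
        adapter.download(simplified)                 # one download ...
        return adapter.execute()                     # ... executed remotely

    status = extend_channel_program(
        [{"op": "write", "count": 80, "data": b"logo screen"}], RemoteAdapter())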

There was no noticeable difference in terminal response at the remote building. However, there was an interesting side-effect at the STL datacenter ... with overall system thruput improving 10-15%. Turns out that with the mainframes (168, 3033), installations were used to spreading the 3274 controllers over the same channels as disk controllers. The problem turned out to be that the 3274s were also extremely slow on the channel side (besides being slow on the terminal head side) ... with very high channel busy ... even for the simplest of operations. The HYPERchannel boxes that directly attached to (real) mainframe channels were significantly more efficient than the 3274s ... for the identical operations (the enormous 3274 channel busy time had been moved to the simulated channel HYPERchannel A51x boxes at the remote site) ... which resulted in significantly reduced contention with disk operations and the overall 10-15% increased thruput. misc. old posts mentioning various things ... including HYPERchannel work
https://www.garlic.com/~lynn/subnetwork.html#hsdt

for other IMS drift ... old reference that when Jim left for Tandem ... he was palming off some number of things on me ... including database consulting with the IMS group ... and talking to customers about relational databases:
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

in this post
https://www.garlic.com/~lynn/2007.html#1

following post in the same thread ... comment about rewriting IOS for disk engineering so it would never fail (related to them attempting to use MVS and finding it had a 15min MTBF in their environment)
https://www.garlic.com/~lynn/2007.html#2

above includes this old email
https://www.garlic.com/~lynn/2007.html#email801015

about initial MVS regression tests with injected 3380 errors; MVS required re-IPL in all the cases (and in 2/3rds of the cases, there was no indication what the problem was). misc. past posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Nostalgia

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Nostalgia
Newsgroups: bit.listserv.ibm-main
Date: 1 Sep 2010 04:49:56 -0700
Robert.Richards@OPM.GOV (Richards, Robert B.) writes:
This thread just proves most of us are getting or already are... *old*! :-)

How about the commercials?

Ovaltine, N-E-S-T-L-E-S, 20 Mule Team Borax, Brylcreem (A Little Dab'll Do Ya!), Ipana Toothpaste with Bucky Beaver (Brusha... Brusha... Brusha. Get the New Ipana - it's dandy for your teeth!), LSMFT, Show us your Lark.

George Carlin had a whole routine on commercial slogans. Hilarious!


I had done dynamic adaptive resource management while an undergraduate in the 60s ... and the company included it in the cp67 distribution.

then in the early 70s, the cp67->vm370 morph involved a lot of simplification and most of it was dropped ... which was followed periodically by lots of pressure from SHARE to have me re-release it for vm370.

then there was the FS period that totally consumed most of the corporation's attention ... and the 370 hardware & software product pipelines were allowed to dry up. when FS was finally killed, there was a mad rush to get products back into the 370 pipeline
https://www.garlic.com/~lynn/submain.html#futuresys

I had converted a bunch of stuff from cp67 to vm370 and was doing product distribution for a large number of internal datacenters (and making less than complimentary comments about the reality of the FS effort) ... some old email
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

a combination of SHARE pressure and mad rush to get stuff back into product pipeline was enough to overcome development group NIH ... I was told to create the resource manager ... which was also selected to be the guinea pig for starting to charge for kernel software (and i had to spend lots of time with various parties working on policies for kernel software charging) ... some past posts about starting to charge for application software with the 23jun69 unbundling announcement (but initially, kernel software was still free)
https://www.garlic.com/~lynn/submain.html#unbundle

Somebody from corporate reviewed the resource manager specs and asked where all the tuning parameters were (the favorite son operating system in POK was doing a resource manager with an enormous number of manual tuning parameters). The comment was that all "modern" operating systems had enormous numbers of manual tuning knobs for use by customer specialists ... and my resource manager couldn't be released w/o having some manual tuning parameters. I tried to explain that a major point of a dynamic adaptive resource manager ... was that the resource management did all the "tuning" ... doing all the work dynamically adapting the system to different configurations and workloads ... but it fell on deaf ears.

So I was forced to add some number of manual tuning parameters and placed them in a module called SRM ... and all the dynamic adaptive stuff went into a module called STP (after the TV commercial punch line for a product associated with muscle cars of the 60s ... The Racer's Edge).

I published the detailed description of the operations (including the components in SRM) and how they operated, and also published the code (which was included as part of the standard source distribution & maintenance; later I was told the details were even taught in some univ. courses). However, there was a joke related to the nature of dynamic adaptive and feedback control operations ... given that the "degrees of freedom" afforded the (SRM) manual tuning knobs were less than the "degrees of freedom" allowed the dynamic adaptive mechanism over the same components (i.e. the dynamic adaptive mechanism could more than compensate for any manual changes that might be made).
https://www.garlic.com/~lynn/subtopic.html#fairshare
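
the dynamic adaptive idea can be caricatured in a few lines (a sketch only, nothing to do with the actual code): derive dispatching priority from measured consumption relative to fair share, so the "tuning" happens continuously by itself:

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        recent_cpu: float    # cpu seconds consumed in the measurement window
        elapsed: float       # window length in seconds

    def priority(u, nusers, share=1.0):
        fair_slice = u.elapsed * share / nusers      # this user's fair share
        return u.recent_cpu / fair_slice             # >1 over, <1 under target

    users = [User("a", 2.0, 10.0), User("b", 0.5, 10.0), User("c", 1.0, 10.0)]
    # under-consumers dispatch first ... no manual knobs required
    for u in sorted(users, key=lambda u: priority(u, len(users))):
        print(u.name)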

--
virtualization experience starting Jan1968, online at home since Mar1970

Nearly $1,000,000 stolen electronically from the University of Virginia

From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Sep, 2010
Subject: Nearly $1,000,000 stolen electronically from the University of Virginia
MailingList: Cryptography
On 09/01/2010 01:39 PM, Perry E. Metzger wrote:
Hardly the first time such things have happened, but it does focus the mind on what the threats are like.

http://krebsonsecurity.com/2010/09/cyber-thieves-steal-nearly-1000000-from-university-of-virginia-college/


In the mid-90s, dialup consumer online banking operations gave pitches on the motivation for moving to the internet (a major justification was the significant cost of supporting the proprietary dialup infrastructure ... including all the issues with supporting serial-port modems; one such operation claimed a library of over 60 different drivers for various combinations of customer PCs, operating systems, operating system levels, modems, etc).

At the same time, the dialup business/commercial online cash-management operations were pitching why they would NEVER move to the internet ... even with SSL, they had a long list of possible threats and vulnerabilities.

Some of the current suggested countermeasures are that businesses have a separate PC that is dedicated solely to online banking operations (and NEVER used for anything else).

a few recent posts on the subject:
https://www.garlic.com/~lynn/2010m.html#38 U.K. bank hit by massive fraud from ZeuS-based botnet
https://www.garlic.com/~lynn/2010m.html#53 Is the ATM still the banking industry's single greatest innovation?
https://www.garlic.com/~lynn/2010m.html#58 memes in infosec IV - turn off HTTP, a small step towards "only one mode"
https://www.garlic.com/~lynn/2010m.html#65 How Safe Are Online Financial Transactions?

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Emulator Software

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: 3270 Emulator Software
Newsgroups: bit.listserv.ibm-main
Date: 1 Sep 2010 13:57:48 -0700
dalelmiller@COMCAST.NET (Dale Miller) writes:
Actually, this is a reflection on Lynn Wheeler's contribution of 8/30.

re:
https://www.garlic.com/~lynn/2010m.html#80 3270 Emulator Software

in that period ... the executive responsible for IMS had left and joined a large financial institution in the area ... and for some time was out hiring IMS developers ... eventually having a larger IMS development group than STL. Also, the email reference (about Jim leaving for Tandem and palming off database consulting with the IMS group to me) makes mention of foreign entities having an IMS-competitive product. In any case, STL management was quite sensitive to the issue of competitive IMS development activity.

the IMS FE service group in the Boulder area ... was faced with a similar situation ... being moved to a bldg on the other side of the highway ... and a similar HYPERchannel (channel extender) implementation was deployed for them.

In the STL case, there was T3 digital radio (microwave) that went to a repeater tower on the hill above STL, to a dish on top of bldg. 12 (on the main plant site), and then to a dish on the roof of the off-site bldg. where the relocated IMS group were sent. After "85" was built (an elevated section cutting the corner of the main plant site), radar detectors were triggered when autos drove thru the path of the signal (between the tower on the hill and the roof of bldg. 12).

In the boulder case, they were moved to a bldg across the highway from where the datacenter was located. Infrared T1 modems (sort of higher powered versions of consumer remote controls) were placed on the roofs of the two bldgs to carry the HYPERchannel signal. There was some concern that there would be "rain-fade" resulting in transmission interruption during severe weather. However, during one of the worst storms ... a white-out snow storm when people couldn't get into work ... the error monitoring recorded only a slight increase in bit-error rate.

However, there was an early transmission interruption problem that would occur in the middle of the afternoon. Turns out that as the sun crossed the sky, it warmed different sides of the bldgs, causing the bldgs to lean slightly in different directions. This slight change in bldg. angle was enough to throw off the alignment of the infrared modems. The result was having to reposition the infrared modem mounting poles on the roofs to make them less sensitive to the way the bldgs leaned as different sides were heated and cooled during the course of the day.

The HYPERchannel vendor attempted to talk the corporation into letting them release my software support. However, there was strong objection from the group in POK that was hoping to eventually get ESCON released (they felt any improvement of HYPERchannel in the market would reduce the chance of ever making the business case for shipping ESCON). As a result, the HYPERchannel vendor had to reverse engineer my software support and re-implement it from scratch (to ship to customers).

One of the things I had done in the support ... was that if I got an unrecoverable transmission error ... I would simulate a "channel check" error in the status back to the kernel software. This was copied in the vendor implementation and would result in a phone call from the 3090 product administrator several years later. Turns out that the industry service that gathered EREP data and generated summary reports of error statistics ... was showing 3090 "channel checks" at 3-4 times the expected rate.

They tracked it down to the HYPERchannel software support generating "channel checks" in simulated ending status. After a little research, I determined that IFCC (interface control checks) resulted in the identical path through error recovery (as "channel checks") and talked the HYPERchannel vendor into changing their software support to simulate IFCC for unrecoverable transmission errors.
https://www.garlic.com/~lynn/subnetwork.html#hsdt
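
in driver terms the change amounted to a one-line status mapping (hypothetical sketch, not the vendor's code):

    # unrecoverable transmission errors had been surfaced as channel checks;
    # mapping them to IFCC instead drives the identical error-recovery path
    # without inflating the EREP channel-check statistics.
    CHANNEL_CHECK = "CC"     # shows up in EREP channel-check summaries
    IFCC = "IFCC"            # interface control check ... same recovery path

    def simulated_ending_status(transmission_ok):
        if transmission_ok:
            return "CE+DE"               # normal channel-end/device-end
        return IFCC                      # was: return CHANNEL_CHECK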

About the same time, there was a different problem inside the corporation ... that seemed to affect STL more than many of the other labs. I had stumbled across the ADVENTURE game at Tymshare (it had been ported from the PDP10 at stanford to the TYMSHARE vm370/cms commercial timesharing service) ... managed to obtain a copy of the fortran source ... and made a copy of the CMS executable available on the internal network. For a period, some number of locations seemed to have all their computing resources going to employees doing nothing but playing ADVENTURE (for people demonstrating that they had successfully acquired all points and finished the game, I would send them a copy of the source). STL management eventually decreed that employees would have a 24hr grace period, but after that, any employee caught playing ADVENTURE during standard work hours would be dealt with severely.

misc. past posts mentioning ADVENTURE:
https://www.garlic.com/~lynn/2001m.html#14 adventure ... nearly 20 years
https://www.garlic.com/~lynn/2002m.html#57 The next big things that weren't
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2007m.html#6 Zork and Adventure
https://www.garlic.com/~lynn/2007o.html#15 "Atuan" - Colossal Cave in APL?
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2008s.html#14 New machine code
https://www.garlic.com/~lynn/2009i.html#16 looking for IBM's infamous "Lab computer"
https://www.garlic.com/~lynn/2010d.html#75 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#82 Adventure - Or Colossal Cave Adventure

--
virtualization experience starting Jan1968, online at home since Mar1970

Set numbers off permanently

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Set numbers off permanently.
Newsgroups: bit.listserv.ibm-main
Date: 2 Sep 2010 05:27:26 -0700
BillF@MAINSTAR.COM (Bill Fairchild) writes:
I was writing about the prevalence of TSO and ISPF, not their exact birthdates. I should have made that clearer in my previous post.

past references to why VM370 performance products and ISPF development were merged into the same group (also a reference to the similar reason why JES2 networking and VNET were originally announced as a combined product):
https://www.garlic.com/~lynn/2006k.html#50 TSO and more was: PDP-1
https://www.garlic.com/~lynn/2009s.html#46 DEC-10 SOS Editor Intra-Line Editing
https://www.garlic.com/~lynn/2010g.html#6 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2010g.html#50 Call for XEDIT freaks, submit ISPF requirements

side-effect of unbundling, starting to charge for software, and gov. looking over your shoulder ... misc. posts mentioning 23jun69 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

--
virtualization experience starting Jan1968, online at home since Mar1970

3270 Emulator Software

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: 3270 Emulator Software
Newsgroups: bit.listserv.ibm-main
Date: 2 Sep 2010 07:20:31 -0700
re:
https://www.garlic.com/~lynn/2010m.html#80 3270 Emulator Software
https://www.garlic.com/~lynn/2010m.html#83 3270 Emulator Software

... image of the 3270 logo screen at the offsite IMS location
https://www.garlic.com/~lynn/vmhyper.jpg

3270 logo screen shot

it is from a 1980 35mm slide presentation on the effort ... the above was a 35mm slide of a 3270 that was scanned and cropped just to show the logo screen.

another part of the 1980 35mm slide presentation
https://www.garlic.com/~lynn/hyperlink.jpg

HYPERchannel channel extender

showing possible HYPERchannel (network) channel extender operation.

In the mid-80s, NCAR did a SAN/NAS-like filesystem implementation using MVS as tape<->disk file staging for (non-IBM) supercomputers with HYPERchannel interconnect. Non-IBM systems would send MVS a request for some data ... MVS would make sure it was staged to disk, download a channel program into the memory of the appropriate HYPERchannel remote device adapter, and return a pointer (for that channel program) to the requester. The requester could then directly execute the channel program ... flowing the data directly from disk to the requester (over the HYPERchannel network) w/o requiring it to pass thru MVS (modulo any tape/disk staging).

This became the requirement for "3rd party transfers" in the later standardization effort for HiPPI (the 100mbyte/sec standards form of Cray channel) & HiPPI switch with IPI-3 disks (i.e. not requiring transferred data to flow thru the control point) ... and also showed up as a requirement in the FCS (& FCS switch) standards meetings (i.e. what FICON is built on).
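
the NCAR-style control/data split can be sketched like so (hypothetical names; the point is that the control point hands back a reference and the data never flows thru it):

    class Adapter:
        # stand-in for a HYPERchannel remote device adapter
        def __init__(self):
            self.programs = {}
        def load_program(self, name):
            self.programs[name] = ["seek", "search", "read"]   # CCW chain
            return name                            # handle for the requester
        def execute(self, handle):
            assert handle in self.programs
            return b"data for " + handle.encode()  # disk->requester direct

    class ControlPoint:
        # stand-in for MVS doing the tape<->disk staging
        def __init__(self, adapter):
            self.adapter = adapter
        def request_file(self, name):
            # ... stage tape->disk if needed (elided) ...
            return self.adapter.load_program(name)   # pointer, not the data

    adapter = Adapter()
    mvs = ControlPoint(adapter)
    handle = mvs.request_file("dataset.x")
    payload = adapter.execute(handle)     # 3rd party transfer: bypasses MVS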

In the early 90s, gov. labs were encouraged to try and commercialize technology they had developed. NCAR did a spin-off of their system as "Mesa Archival" ... but the implementation was rewritten to not require MVS (i.e. support done on other platforms). The San Jose disk division invested in "Mesa Archival" ... and we were asked to periodically audit and help them whenever we could (they had offices at the bottom of the hill from where NCAR is located) ... aka San Jose was looking at it as helping them get into large non-IBM-mainframe disk farms.

--
virtualization experience starting Jan1968, online at home since Mar1970

Set numbers off permanently

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Set numbers off permanently.
Newsgroups: bit.listserv.ibm-main
Date: 3 Sep 2010 07:21:18 -0700
lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
side-effect of unbundling, starting to charge for software, and gov. looking over your shoulder ... misc. posts mentioning 23jun69 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle


re:
https://www.garlic.com/~lynn/2010m.html#84 Set numbers off permanently

aka one of the issues with adding new features was that the development and other product costs had to be covered by revenue flow ... this could be fudged some by combining different products into the same group and calculating revenue against costs at the group/aggregate level (leveraging revenue from products where the costs had been minimized, to underwrite the costs of other products).
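
i.e. just aggregation arithmetic; a toy example with made-up numbers:

    # toy illustration (made-up numbers): a product's revenue needn't cover
    # its own costs as long as the group total does.
    products = {"mature_product": {"revenue": 100, "cost": 20},
                "new_feature":    {"revenue": 10,  "cost": 60}}
    revenue = sum(p["revenue"] for p in products.values())
    cost    = sum(p["cost"]    for p in products.values())
    print(revenue >= cost)   # True: 110 >= 80, even tho new_feature loses money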

--
virtualization experience starting Jan1968, online at home since Mar1970

Nearly $1,000,000 stolen electronically from the University of Virginia

From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Sep, 2010
Subject: Nearly $1,000,000 stolen electronically from the University of Virginia
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2010m.html#82 Nearly $1,000,000 stolen electronically from the University of Virginia

American university sees $1 million stolen after banking malware intercepts controller's credentials
http://www.scmagazineuk.com/american-university-sees-1-million-stolen-after-banking-malware-intercepts-controllers-credentials/article/178126/

as i've mentioned before, in the mid-90s, consumer online dialup banking operations were making presentations about moving to the internet ... largely motivated by the significant consumer support costs associated with a proprietary dialup infrastructure (one presentation claimed a library of over 60 drivers just to handle different combinations of serial-port modems, operating system levels, etc; the problems with serial-port conflicts are well known and were a major motivation for USB ... although there seemed to have been a serious lapse at the beginning of the century with an attempted deployment of a new financial-related serial-port device).

in the same mid-90s time frame, the commercial/business online dialup cash-management (aka banking) were making presentations that they would NEVER move to the internet because of the large number of threats and vulnerabilities (still seen to this day).

the current countermeasure recommendation (for internet based commercial online banking) is that businesses should have a separate PC that is solely dedicated to online banking and NEVER used for any other purpose.

--
virtualization experience starting Jan1968, online at home since Mar1970

Baby Boomer Execs: Are you afraid of LinkedIn & Social Media?

From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Sep, 2010
Subject: Baby Boomer Execs: Are you afraid of LinkedIn & Social Media?
Blog: IBMers
I was blamed for computer conferencing on the internal network in the late 70s and early 80s (some estimate that between 20,000 & 30,000 employees were reading some amount of the material, if not directly participating). Folklore is that when the executive committee first learned of the internal network and computer conferencing, 5 of the 6 members wanted to fire me. misc. past posts mentioning internal network:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Somewhat as a result, a researcher was paid to sit in the back of my office for 9 months, taking notes on how I communicated. They also got copies of all my incoming and outgoing email and logs of all instant messages. The result was a research report as well as material for papers, books, and a Stanford PhD (joint with Language and Computer AI ... in the area of computer mediated communication). One number from the study was that I communicated electronically directly with an avg. of 270 different people per week (over the 9 months of the study). misc. past posts mentioning computer mediated communication:
https://www.garlic.com/~lynn/subnetwork.html#cmc

One of the things that they really weren't aware of was that I had tripped across a copy of the game ADVENTURE at tymshare (it had been ported from the stanford pdp10 to the tymshare vm370/cms service) and started distributing it internally. There was a period where it was claimed that a lot of development activity had come to a halt because so many people were playing ADVENTURE internally. misc. past posts mentioning ADVENTURE:
https://www.garlic.com/~lynn/2001m.html#14 adventure ... nearly 20 years
https://www.garlic.com/~lynn/2002m.html#57 The next big things that weren't
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2007o.html#15 "Atuan" - Colossal Cave in APL?
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2009i.html#16 looking for IBM's infamous "Lab computer"
https://www.garlic.com/~lynn/2010d.html#75 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#82 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010m.html#83 3270 Emulator Software

--
virtualization experience starting Jan1968, online at home since Mar1970

UAE Man-in-the-Middle Attack Against SSL

From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Sep, 2010
Subject: UAE Man-in-the-Middle Attack Against SSL
Blog: Schneier on Security
re:
http://www.schneier.com/blog/archives/2010/09/uae_man-in-the-.html

For the most part, self-signed certificates are a side-effect of the software library; they are actually just an entity-id/organization-id paired with a public key. They can be used to populate a repository of trusted public keys (with their corresponding entity-id/organization-id); effectively what is found preloaded in browsers as well as what is used by SSH.

The difference in a Certification Authority paradigm is that the public keys (frequently encoded in self-signed certificates) from some (say browser) trusted public key repository can be used to extend trust to other public keys (by validating digital certificates which contain other public keys). The methodology has been extended to form a trust chain ... where cascading public keys are used to extend trust to additional public keys (paired with their entity-id/organization-id).

In the Certification Authority paradigm, all public keys from a user's (possibly browser) trusted public key repository are accepted as being equivalent ... reducing the integrity of the overall infrastructure to that of the Certification Authority with the weakest integrity (if a weak integrity CA incorrectly issues a certificate for some well known organization, it will be treated the same as a correctly issued certificate from the highest integrity CA).

Another way of looking at it: digital certificates are messages with a defined structure and purpose that are validated using trusted public keys which the relying party has preloaded into their repository of trusted public keys (or has had preloaded for them, in the case of browsers).
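
the trust-chain mechanics reduce to a few lines (a sketch only ... signature verification modeled here as simple issuer/subject matching):

    # sketch of certification-authority trust-chain walking: a leaf cert is
    # accepted if the chain terminates at ANY preloaded trusted key ... which
    # is why overall integrity equals that of the weakest CA in the repository.
    trusted_roots = {"root-ca-1", "root-ca-2"}       # preloaded repository

    def chain_trusted(chain):
        # chain is leaf-first; real code verifies each digital signature,
        # modeled here as issuer/subject name matching
        for cert, issuer in zip(chain, chain[1:]):
            if cert["issuer"] != issuer["subject"]:
                return False                         # broken chain
        return chain[-1]["subject"] in trusted_roots

    chain = [{"subject": "www.example.com", "issuer": "intermediate"},
             {"subject": "intermediate",    "issuer": "root-ca-2"},
             {"subject": "root-ca-2",       "issuer": "root-ca-2"}]
    print(chain_trusted(chain))                      # True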

misc. past posts discussing SSL certs
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

If you are going to maintain your own trusted repository ... what you are really interested in is the trusted server URL/public key pair (contained in the certificate) ... the certificates themselves then become redundant and superfluous, and all you really care about is whether the (server's) public key ever changes ... in which case there may be a requirement to check with some authoritative agency as to the "real" public key for that server URL.
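
a minimal sketch of maintaining such a repository (SSH-style key pinning; hypothetical structure):

    # minimal sketch of a self-maintained trusted repository: what matters
    # is the server URL -> public key pairing; the certificate wrapper is
    # redundant. The only event of interest is the key changing.
    trusted = {}    # url -> public key (pinned out-of-band or on first contact)

    def check_server(url, presented_key):
        pinned = trusted.get(url)
        if pinned is None:
            trusted[url] = presented_key   # first contact: pin it
            return True
        if pinned != presented_key:
            return False    # key changed: check an authoritative source
        return True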

We had been called in to consult with a small client/server startup that wanted to do payment transactions on their server; the startup had also invented this technology called SSL that they wanted to use. Part of the effort was applying SSL technology to processes involving their server. There were some number of deployment and use requirements for "safe" SSL ... which were almost immediately violated.

--
virtualization experience starting Jan1968, online at home since Mar1970

