List of Archived Posts

2008 Newsgroup Postings (08/17 - 09/11)

Fraud due to stupid failure to test for negative
Yet another squirrel question - Results (very very long post)
Yet another squirrel question - Results (very very long post)
Medical care
Fraud due to stupid failure to test for negative
Fraud due to stupid failure to test for negative
Yet another squirrel question - Results (very very long post)
Future architectures
Fraud due to stupid failure to test for negative
Unbelievable Patent for JCL
Unbelievable Patent for JCL
Yet another squirrel question - Results (very very long post)
Fraud due to stupid failure to test for negative
Yet another squirrel question - Results (very very long post)
Anyone heard of a company called TIBCO ?
Fraud due to stupid failure to test for negative
Fraud due to stupid failure to test for negative
Fraud due to stupid failure to test for negative
IBM-MAIN longevity
IBM-MAIN longevity
IBM-MAIN longevity
Fraud due to stupid failure to test for negative
Future architectures
Blinkylights
Some confusion about virtual cache
Taxes
Fraud due to stupid failure to test for negative
Fraud due to stupid failure to test for negative
Yet another squirrel question - Results (very very long post)
Quality of IBM school clock systems?
Taxes
Baudot code direct to computers?
IBM THINK original equipment sign
Taxes
Future architectures
IBM THINK original equipment sign
IBM THINK original equipment sign
Baudot code direct to computers?
Baudot code direct to computers?
Baudot code direct to computers?
IBM--disposition of clock business
IBM--disposition of clock business
APL
Baudot code direct to computers?
IBM-MAIN longevity
IBM--disposition of clock business
Baudot code direct to computers?
IBM-MAIN longevity
Blinkylights
Taxes
Taxes
Baudot code direct to computers?
Are family businesses unfair competition?
Are family businesses unfair competition?
Blinkylights
With all the highly publicised data breeches and losses, are we all wasting our time?
With all the highly publicised data breeches and losses, are we all wasting our time?
"Engine" in Z/OS?
Blinkylights
CHROME and WEB apps on Mainframe?
CHROME and WEB apps on Mainframe?
CHROME and WEB apps on Mainframe?
CHROME and WEB apps on Mainframe?
CHROME and WEB apps on Mainframe?
CHROME and WEB apps on Mainframe?
CHROME and WEB apps on Mainframe?
With all the highly publicised data breeches and losses, are we all wasting our time?
Is Virtualisation a Fad?
New technology trends?
Speculation ONLY
Why SSNs Are Not Appropriate for Authentication and when, where and why should you offer/use it?
TJ Maxx - why are they still in business?
What are security areas to be addressed before starting an e-commerce transaction or setting up a portal?
Blinkylights
Speculation ONLY
Fraud due to stupid failure to test for negative
When risks go south: FM&FM to be nationalized
Fraud due to stupid failure to test for negative
When risks go south: FM&FM to be nationalized
Blinkylights
Fraud due to stupid failure to test for negative
Fraud due to stupid failure to test for negative
Data sharing among Industry players about frauds
Fraud due to stupid failure to test for negative
question for C experts - strcpy vs memcpy
Fraud due to stupid failure to test for negative
WSJ finds someone to blame.... be skeptical, and tell the WSJ to grow up
Fraud due to stupid failure to test for negative
Sustainable Web
Fraud due to stupid failure to test for negative
z/OS Documentation - again
Blinkylights
Blinkylights
What do you think are the top characteristics of a good/effective leader in an organization? Do you feel these characteristics are learned or innate to an individual?
How important, or not, is virtualization to cloud computing?
Blinkylights
Blinkylights
Blinkylights
what is the difference between web server and application server?
Blinkylights

Fraud due to stupid failure to test for negative

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: Sun, 17 Aug 2008 11:23:54 -0700 (PDT)
Subject: Re: Fraud due to stupid failure to test for negative
re:
https://www.garlic.com/~lynn/2008l.html#89 Fraud due to stupid failure to test for negative

not transit specific ... but how payment things can go wrong ... lots of past posts referencing YES CARD vulnerability:
https://www.garlic.com/~lynn/subintegrity.html#yescard

it was a chip card solution effort started in the mid-90s (about the same time we got involved in the x9a10 financial standard working group) and we've characterized the effort as being focused on a countermeasure to lost/stolen magstripe cards.

the issue was that by the mid-90s, additional kinds of fraud had also become quite commonplace.

in the x9a10 financial standard working group scenario ... it had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments ... aka ALL, as in ALL (not just point-of-sale, not just internet, not just face-to-face, not just credit, not just debit, not just stored-value, etc). as a result, the x9a10 working group had to do detailed end-to-end threat and vulnerability studies of multiple different kinds of retail payments ... and come up with a solution that addressed everything (and was also superfast and super inexpensive ... w/o sacrificing security and integrity).

with the intense myopic concentration on chipcards as a solution to the lost/stolen magstripe card vulnerability ... it appeared to lead to a situation where the rest of the infrastructure was made more vulnerable. in fact, in the early part of this decade, at an atm industry presentation on YES CARD fraud ... as the actual circumstance started to dawn on the audience ... there was a spontaneous outburst from somebody in the audience ... "do you mean that they managed to spend billions of dollars to prove that chipcards are less secure than magstripe cards?"

in that timeframe there was also a large pilot deployment in the states with a million or so cards. when the YES CARD scenario was explained to the people doing the deployment ... the reaction was to make configuration changes in the issuing process of valid cards ... which actually had absolutely no effect on the YES CARD fraud ... which was basically a new kind of point-of-sale terminal vulnerability that had been created as a side-effect of the chipcard specification ... and involved counterfeit cards (not valid, issued cards).
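
for anybody that hasn't seen the YES CARD writeups ... a minimal sketch of the point-of-sale terminal decision flow that the attack exploits (my own illustration of the commonly described static-data-authentication scenario ... not any vendor's actual code, and all the function names are hypothetical):

#include <stdbool.h>
#include <stdio.h>

/* a counterfeit "YES CARD" is built from skimmed static card data; the
 * static data verifies (it is a copy of a real card's) and the chip
 * itself simply answers "yes" to whatever the terminal asks */
static bool card_static_data_verifies(void)          { return true; }
static bool card_says_pin_ok(const char *pin)        { (void)pin; return true; }
static bool card_approves_offline(long amount_cents) { (void)amount_cents; return true; }

/* terminal logic: once the static data checks out, the terminal trusts
 * the card's own answers for PIN verification and offline authorization
 * ... so the transaction never goes online to the issuer */
static bool terminal_approve(const char *pin, long amount_cents)
{
    if (!card_static_data_verifies())
        return false;
    if (!card_says_pin_ok(pin))                      /* counterfeit: always yes */
        return false;
    return card_approves_offline(amount_cents);      /* counterfeit: always yes */
}

int main(void)
{
    /* any PIN, any amount ... approved offline */
    printf("approved: %d\n", terminal_approve("0000", 500000));
    return 0;
}

which is also why the configuration changes in the issuing process (mentioned above) couldn't help ... the counterfeit card never goes anywhere near the issuing process.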

misc. past posts mentioning the x9.59 financial transaction standard (that was product of the x9a10 financial standard working group)
https://www.garlic.com/~lynn/x959.html#x959

some number of other URLs referencing the boston transit:
MIT case shows folly of suing security researchers
http://searchsecurity.techtarget.com/news/column/0,294698,sid14_gci1325406,00.html
Massachusetts: MIT students deserve 'no First Amendment protection'
http://news.cnet.com/8301-1009_3-10017438-83.html?hhTest=1
MIT Subway Hack Paper Published on the Web
http://www.pcmag.com/article2/0%2c2817%2c2327898%2c00.asp
Judge refuses to lift gag order on MIT students in Boston subway-hack case
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9112641
MIT Presentation on Subway Hack Leaks Out
http://www.darkreading.com/document.asp?doc_id=161424
Exploits & Vulnerabilities: Subway Hack Gets 'A' From Professor, TRO From Judge
http://www.technewsworld.com/story/64118.html?welcome=1218494580

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Yet another squirrel question - Results (very very long post)

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: Mon, 18 Aug 2008 07:58:51 -0700 (PDT)
Subject: Re: Yet another squirrel question - Results (very very long post)
On Aug 18, 9:51 am, Quadibloc wrote:
Software was written as an afterthought, to help people use those beasts. Gradually, things like compilers and operating systems got included, and some precautions were taken to prevent competitors from freeloading on this effort; thus, IBM unbundled and started charging for software as plug-compatibles started to emerge.

software was free ... the 23jun69 unbundling was a response to legal action by the gov. and others. it was not only software, but also included system services, hardware maintenance, lots of stuff. the company was able to make the case that kernel software should not be unbundled ... and was allowed to remain free.

lots of past posts about unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

one of the unbundling issues was software engineering services ... before unbundling ... groups of SEs would work at the customer site ... and brand new SEs would effectively get apprentice training as part of such a team. After unbundling ... all the SE time spent at the customer had to be charged for ... and nobody came up with a good mechanism for charging for SEs in training.

This was what spawned the original idea for HONE (Hands-On Network Environment) ... basically some number of (virtual machine) CP67 datacenters around the country providing SEs with hands-on operating system experience (dos, mft, mvt, etc). lots of past HONE postings
https://www.garlic.com/~lynn/subtopic.html#hone

i've mentioned recently that as undergraduate in the 60s, i was also involved in doing a mainframe clone controller ...
https://www.garlic.com/~lynn/2008l.html#23 Memories of ACC, IBM Channels and Mainframe Internet Devices

and a write-up listing us as cause of the clone-controller (or pcm, plug-compatible) business.
https://www.garlic.com/~lynn/submain.html#360pcm

The 360 pcm/clone controller business was a major motivation behind the future system effort
https://www.garlic.com/~lynn/submain.html#futuresys

however, the distraction of the future system business then helped/allowed some number of plug-compatible computers (as opposed to controllers) to gain a market foot-hold. After the future system effort was killed, there was a mad rush to get stuff back into the 370 product pipeline and also figure out how to deal with the plug-compatible computers. Part of this was the decision to start charging for kernel software (reversing the earlier justification to not unbundle kernel software). Recent reference to a talk that Amdahl gave at MIT in the early 70s about the justification (for a plug-compatible mainframe company) that he used with the VCs/investors:
https://www.garlic.com/~lynn/2008g.html#54 performance of hardware dynamic scheduling

As i've recently mentioned, the mad rush to get stuff back into the 370 product pipeline ... appeared to contribute to picking up a lot of 370 stuff i'd been doing for "CSC/VM" (all during the future system period).
https://www.garlic.com/~lynn/2008l.html#72 Error handling for system calls
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#85 old 370 info

and releasing it as products. Part of the "CSC/VM" work was the "resource manager", which was packaged for release as a separate kernel product ... and it was also selected to be the guinea pig for the change (in policy) to start charging for kernel software (effectively in reaction to the plug-compatible processors that got a foot-hold in the market during the period of the future system distraction).

as an aside ... a lot of what was in the "resource manager", i had earlier done as undergraduate in the 60s and had been released as part of cp67 ... but was dropped in the morph of cp67 to vm370. misc. past posts mentioning scheduler (major component of the resource manager)
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Yet another squirrel question - Results (very very long post)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Yet another squirrel question - Results (very very long post)
Newsgroups: alt.folklore.computers
Date: Tue, 19 Aug 2008 08:03:08 -0400
lynn writes:
As i've recently mentioned, the mad rush to get stuff back into the 370 product pipeline ... appeared to contribute to picking up a lot of 370 stuff i'd been doing for "CSC/VM" (all during the future system period).
https://www.garlic.com/~lynn/2008l.html#72 Error handling for system calls
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#85 old 370 info

and releasing as products. Part of the csc/vm work was to release the "resource manager" as a separately packaged kernel product ... and it was also selected to be the guinea pig for change in policy to start charging for kernel software (effectively in reaction to the plug compatible processors that got a foot-hold in the market during the period of the future system distraction)


re:
https://www.garlic.com/~lynn/2008m.html#1 Yet another squirrel question - Results (very very long post)

a lot of the resource manager was actually stuff that i had done as undergraduate in the 60s on cp67 and released in that product ... but dropped in the morph from cp67 to vm370 ... which included some amount of simplification. For instance, the morph from cp67 to vm370 also dropped much of the fastpath stuff I had done in cp67 (especially in the interrupt handlers). One of the first things I had done (once the science center had gotten a 370) was to re-implement a lot of the fastpath stuff in vm370. That actually had been incorporated and shipped in something like release 1plc9 (i.e. "PLCs" were monthly updates ... plc9 would have been the 9th monthly update to the initial vm370 release).
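
as an aside, "fastpath" here just means recognizing the overwhelmingly common case in the first-level handler and finishing it in a handful of instructions ... only falling into the fully general path for anything unusual. a minimal sketch of the idea (mine, not actual cp67/vm370 code):

#include <stdbool.h>
#include <stdio.h>

#define NORMAL_END 0x0C    /* hypothetical "clean completion" status */

struct io_request {
    bool busy;
    int  status;
    void (*done)(struct io_request *);
};

/* the fully general path: error recovery, sense data, retries,
 * accounting, queue rescans ... all elided here */
static void general_io_interrupt(struct io_request *rq, int status)
{
    if (rq) {
        rq->status = status;
        rq->busy = false;
        if (rq->done) rq->done(rq);
    }
}

/* first-level handler with a fastpath: a clean completion for the
 * request already in hand is recognized with a couple of cheap tests
 * and finished immediately; everything else takes the long way around */
static void io_interrupt(struct io_request *rq, int status)
{
    if (rq && rq->busy && status == NORMAL_END) {
        rq->status = status;
        rq->busy = false;
        if (rq->done) rq->done(rq);
        return;
    }
    general_io_interrupt(rq, status);
}

static void completed(struct io_request *rq) { printf("done, status %d\n", rq->status); }

int main(void)
{
    struct io_request rq = { .busy = true, .status = 0, .done = completed };
    io_interrupt(&rq, NORMAL_END);   /* takes the fastpath */
    return 0;
}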

one of the other things that i got roped into ... besides some of the stuff mentioned in these recent posts
https://www.garlic.com/~lynn/2008l.html#83 old 370 info
https://www.garlic.com/~lynn/2008l.html#85 old 370 info

was a 5-way SMP project, code named VAMPS ... which was canceled before it shipped ... misc. past posts
https://www.garlic.com/~lynn/submain.html#bounce

the basic design was then picked up when it was decided to release SMP support in the standard vm370 product. A problem was that the resource manager had already been shipped as the guinea pig for charging for kernel software. As part of that activity, i got to spend a lot of time with contracts and legal people working on policies for kernel software charging. The "initial" pass (at charging for kernel software) was that kernel software directly related to hardware operation would still be free (device drivers, smp support, etc) ... but other stuff could be charged for. misc. past posts mentioning unbundling and/or my resource manager being the guinea pig for the change to start charging for kernel software
https://www.garlic.com/~lynn/submain.html#unbundle

The already shipped resource manager didn't directly contain any SMP support ... but it did have some amount of kernel reorg and facilities that the SMP design was dependent on. When it came time to ship the SMP code ... it created something of a dilemma ... since it would violate policy to require the customer to purchase the "resource manager" in order for (the free) multiprocessing support to work. The dilemma was resolved by moving all the dependent code out of the resource manager and into the free kernel base (which was 80-90 percent of the actual lines of code in the initial/original resource manager release).

past posts mentioning SMP (and/or charlie inventing the compare&swap instruction)
https://www.garlic.com/~lynn/subtopic.html#smp
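
for anybody that hasn't run into it ... compare&swap is the atomic primitive that lets multiple processors update shared data without a global lock. a minimal sketch of the canonical retry loop (C11 atomics standing in for the 370 CS instruction ... not actual vm370 smp code):

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long counter = 0;

/* the canonical compare&swap pattern: fetch the old value, compute the
 * new value, and only store it if nobody else changed the word in the
 * meantime; otherwise re-fetch and try again */
static void add_to_counter(long delta)
{
    long old = atomic_load(&counter);
    long new_val;
    do {
        new_val = old + delta;
        /* on failure, 'old' is reloaded with the current value */
    } while (!atomic_compare_exchange_weak(&counter, &old, new_val));
}

int main(void)
{
    add_to_counter(5);
    add_to_counter(7);
    printf("counter = %ld\n", (long)atomic_load(&counter));   /* 12 */
    return 0;
}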

other past posts in this thread:
https://www.garlic.com/~lynn/2008l.html#78 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#84 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#86 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#87 Yet another squirrel question - Results (very very long post)

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Medical care

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Medical care
Newsgroups: alt.folklore.computers
Date: Tue, 19 Aug 2008 08:58:53 -0400
Lars Poulsen <lars@beagle-ears.com> writes:
There are, unfortunately, too many nursing homes that fit this description. This level of care can be done on an industrial scale, and tends to be what Medicare and similar programs will pay for. To be better than that, needs a slightly higher staffing level, and requires a slightly higher grade of employee, supervised by managers that can tell the difference and who care about the residents' quality of life. Such facilities exist, but struggle to find enough residents that can pay the cost (often about twice the price at the industrial facility).

in another forum ... in a much earlier thread about retiring baby boomers increasing the retired population by a factor of four ... with the following generation having only half as many workers ... for an overall increase of eight times in the ratio of retirees to workers ... one of the other posters complained that it was becoming increasingly hard to find workers providing geriatric services. however, i pointed out that the general explosion in the ratio of retirees to workers ... also applies to workers providing geriatric services (there will be only 1/8th as many workers per retiree, including workers providing geriatric services).

The other issue is that Medicare reimbursements are typically actually below cost of services .... forcing establishments to subsidize Medicare patients from other income sources ... or refusing to take Medicare patients.

There have been some number of articles that one of the Japanese motivations for work on robots ... is to fill the gap in providing services to the geriatric generation.

past posts mentioning baby boomer retirement
https://www.garlic.com/~lynn/2008b.html#3 on-demand computing
https://www.garlic.com/~lynn/2008c.html#16 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#69 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008f.html#99 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008g.html#50 CA ESD files Options
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom
https://www.garlic.com/~lynn/2008h.html#11 The Return of Ada
https://www.garlic.com/~lynn/2008h.html#26 The Return of Ada
https://www.garlic.com/~lynn/2008h.html#57 our Barb: WWII
https://www.garlic.com/~lynn/2008i.html#56 The Price Of Oil --- going beyong US$130 a barrel
https://www.garlic.com/~lynn/2008i.html#98 dollar coins
https://www.garlic.com/~lynn/2008j.html#80 dollar coins
https://www.garlic.com/~lynn/2008k.html#5 Republican accomplishments and Hoover
https://www.garlic.com/~lynn/2008l.html#37 dollar coins

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Tue, 19 Aug 2008 09:46:02 -0400
Charlton Wilbur <cwilbur@chromatico.net> writes:
They're bureaucrats. The top-level ones are appointed as a reward for political toadying; the standard for a good decision is not whether it actually solves the problem but whether it's politically defensible.

nearly all large static (bureaucratic) environments ... drift towards maintaining the status quo ... in static environment ... success isn't being able to overcome and deal with problems ... but in being able to get along with (and support) the other bureaucrats.

it typically is only in changing environment ... where there is a higher premium placed on actually being able to solve problems ... than being able to get along & support the other members.

for other drift ... a frequent example used was great britain appointing lords as military leaders going into WW1.

there has been some suggestion that natural selection similarly contributes to the distribution of "myers-briggs" personality types ... that relatively static environments tend to favor the "social member" types ... as opposed to the "solve problem" types (which are frequently also labeled "independent" ... another indication of where society places its value)

there then can be discontinuities ... when new problems actually need to be solved ... and frequently the knee-jerk response is to blame the ones that actually exposed the problems (as opposed to the bureaucracy responsible for the problems). There is a little of the emperor's new clothes parable in this.

for other drift, Boyd saw a huge amount of this in attempting to address problems in the large military bureaucracy ... misc. past posts mentioning Boyd (and/or OODA-loops)
https://www.garlic.com/~lynn/subboyd.html

and for more Boyd topic drift, there was a recent note that Boyd's strategy and OODA-loops are now a cornerstone of this executive MBA program
http://www.familybusinessmba.kennesaw.edu/home

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Tue, 19 Aug 2008 14:54:08 -0400
lynn writes:
re:
https://www.garlic.com/~lynn/2008l.html#89 Fraud due to stupid failure to test for negative

not transit specific ... but how payment things can go wrong ... lots of past posts referencing YES CARD vulnerability:
https://www.garlic.com/~lynn/subintegrity.html#yescard


re:
https://www.garlic.com/~lynn/2008m.html#0 Fraud due to stupid failure to test for negative

a post from more than two years ago discussing (new) appearance of flaws and vulnerabilities
https://www.garlic.com/~lynn/2006l.html#33

including this reference to trials held in 1997
http://www-03.ibm.com/industries/financialservices/doc/content/solution/1026217103.html

and reports of flaws, exploits, and vulnerabilities started to appear within a year or so of the trials (i.e. a decade ago).

now come reports that flaws and vulnerabilities are a brand new discovery

Criminal gangs in new Chip and Pin fraud
http://www.workplacelaw.net/news/display/id/16140
Chip and pin fraud could hit city stores
http://www.walesonline.co.uk/news/wales-news/2008/08/15/chip-and-pin-fraud-could-hit-city-stores-91466-21537769/
Probe uncovers first chip-and-pin card fraud
http://www.financialdirector.co.uk/accountancyage/news/2224006/probe-uncovers-first-chip-pin
Chip And Pin Fraud On The Increase
http://financialadvice.co.uk/news/2/creditcards/7542/Chip-And-Pin-Fraud-On-The-Increase.html
Fraudsters hijacking Chip and Pin
http://www.metro.co.uk/news/article.html?in_article_id=263563&in_page_id=34
Police warn of new chip-and-pin fraud
http://www.financemarkets.co.uk/2008/08/13/police-warn-of-new-chip-and-pin-fraud/
Gangs develop new chip-and-pin fraud
http://business.timesonline.co.uk/tol/business/industry_sectors/technology/article4525429.ece
Criminals Crack Chip-and-Pin Technology Wide Open
http://security.itproportal.com/articles/2008/08/14/criminals-crack-chip-and-pin-technology-wide-open/
Fraudsters have hijacked Chip and PIN
http://security.itproportal.com/articles/2008/08/14/fraudsters-have-hijacked-chip-and-pin/
Police warn of security threat to every chip-and-Pin terminal
http://www.computerweekly.com/Articles/2008/08/18/231841/police-warn-of-security-threat-to-every-chip-and-pin.htm
Police Warns About Chip and Pin Shortcomings While More Scam Suspects Arrested
http://security.itproportal.com/articles/2008/08/19/police-warns-about-chip-and-pin-shortcomings-while-more-scam-suspects-arrested/
Major bank card scam uncovered
http://www.irishtimes.com/newspaper/breaking/2008/0818/breaking84.htm
Chip and Pin protection cracked like a rotten foreign egg
http://www.itwire.com/content/view/20035/53/
Gangs have cracked Chip and PIN cards, say police
http://www.computerweekly.com/Articles/2008/08/13/231816/gangs-have-cracked-chip-and-pin-cards-say-police.htm
Chip and PIN gang busted by specialist police unit
http://www.theinquirer.net/gb/inquirer/news/2008/08/14/chip-pin-gang-busted-police
New chip-and-pin danger
http://www.qas.co.uk/company/data-quality-news/new_chip_and_pin_danger_2574.htm
Credit card code? What code?
http://www.latimes.com/business/investing/la-tr-insider17-2008aug17,0,6886084.story
Analysis: The rise (and fall) of Chip and PIN
http://www.itpro.co.uk/605568/analysis-the-rise-and-fall-of-chip-and-pin
Warning as gang clone bank cards
http://ukpress.google.com/article/ALeqM5jZIV6H0MsQgs6-4vl4_tATlpXn_g

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Yet another squirrel question - Results (very very long post)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Yet another squirrel question - Results (very very long post)
Newsgroups: alt.folklore.computers
Date: Wed, 20 Aug 2008 08:47:00 -0400
Roland Hutchinson <my.spamtrap@verizon.net> writes:
You are more likely to have known it from music department concerts than from anything I ever had do do with the LOTS community, or maybe from my involvement in the successful effort to persuade the Stanford Libraries that their shiny new online public catalog would be a disaster for researchers (especially in music, but also in other humanities fields and elsewhere) if they went ahead and closed the card catalog without fixing some of the online system's most blatant shortcomings with respect to collocation, forms of entry, and cross references. The whole database was of course full of things that could be filed correctly by humans but were not uniform enough to appear identical to a machine of very little brain. (They kept the card catalog going for an extra year once enough people on campus realized what the problems were and started to complain and took the time to fix the worst of the problems; I'm afraid getting the complaining rolling was largely my doing.)

re:
https://www.garlic.com/~lynn/2008l.html#80 Book: "Everyone Else Must Fail" --Larry Ellison and Oracle ???

somewhat in conjunction with talking to the people at NLM (middle of last decade) ... we also dropped by people at lane medical library a couple of times ... there was small play related to superman. this timeline has it going online in '87.
http://lane.stanford.edu/portals/history/chronlane.html

and
http://lane.stanford.edu/100years/history.html

from above ... related to NLM (as opposed to LOIS)
In the 1950s and 1960s, Lane's one and only reference librarian (Anna Hoen) spent her mornings scanning new journal arrivals and telephoning individual faculty to help them stay abreast of the current literature. In 1971, Lane joined a handful of experimental libraries to use AIM-TWX, the first computerized search protocol for Index Medicus (the precursor to MEDLINE). With the web revolution in the 1990s, Lane rapidly expanded its online journal subscriptions and provided access for physicians and students.

... snip ...

a couple weeks ago we got a tour of LOC ... including going into the (physical) card catalog (1980 and earlier)


http://www.loc.gov/rr/main/inforeas/card.html

from above:
The Main Card Catalog, located adjacent to the Main Reading Room on the first floor of the Jefferson Building, contains subject, author, title, and some other cards for most books cataloged by the Library through 1980 (1978 for subject cards). Each work cataloged is represented by a card or set of cards showing the name of the author, the title of the book, the place of publication, the publisher, and the date of publication. This information is followed by the number of pages or volumes, a brief description of the illustrative material, and the height in centimeters. If the book is part of a series, the name of the series is shown in parentheses after the size. A call number, consisting of a combination of letters and numbers, appears in the upper left-hand corner of the card and/or is printed in the lower portion of the card.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Future architectures

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architectures
Newsgroups: alt.comp.hardware.pc-homebuilt,comp.arch,sci.electronics.design
Date: Wed, 20 Aug 2008 09:15:47 -0400
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
I knew then when Intel 286 so-called virtual memory looked like, and I don't call it virtual memory. Nor, interestingly, did most of the people in IBM I talked to - they took a HELL of a long time to learn about virtual memory, but did eventually learn. Other people seem slower.

of course I'll mostly agree with you ... except for small pockets like the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

some of the people from ctss had gone to the science center on the 4th flr ... and some went to multics on the 5th flr.

science center had done virtual machine implementation in the mid-60s. original was cp40 ... running on a modified 360/40 with address relocation hardware ... and morphed into cp67 when 360/67 (with standard address relocation hardware) became available.

as undergraduate in the late 60s, i rewrote much of cp67 code ... including the virtual memory management and things like page replacement (including creating a global LRU page replacement ... when much of the academic efforts of the period were directed at local LRU page replacement).
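
"global" LRU here means the replacement scan considers every resident page in the machine (rather than giving each address space its own local pool and only replacing within it). a minimal clock-style approximation of the idea (my own sketch, not the cp67 code):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NFRAMES 1024

struct frame {
    bool in_use;        /* frame currently holds somebody's virtual page     */
    bool referenced;    /* reference indicator, set when the page is touched */
};

static struct frame frames[NFRAMES];   /* ALL real page frames, every address space */
static size_t clock_hand = 0;

/* advance the hand around the global frame table: a recently referenced
 * page gets its reference bit reset (a "second chance"); the first
 * unreferenced frame found is the replacement victim */
static size_t select_victim(void)
{
    for (;;) {
        size_t idx = clock_hand;
        struct frame *f = &frames[idx];
        clock_hand = (clock_hand + 1) % NFRAMES;

        if (!f->in_use)
            return idx;               /* free frame ... no replacement needed */
        if (f->referenced) {
            f->referenced = false;    /* second chance, keep scanning */
            continue;
        }
        return idx;                   /* not referenced since the last lap */
    }
}

int main(void)
{
    frames[0].in_use = true; frames[0].referenced = true;
    frames[1].in_use = true; frames[1].referenced = false;
    printf("victim frame: %zu\n", select_victim());   /* frame 1 */
    return 0;
}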

this showed up later in the early 80s ... when one of Jim's co-workers at Tandem had done his stanford phd thesis on page replacement algorithms (very similar to what i had done as undergraduate in the late 60s) and there was enormous pressure not to grant a phd on something that wasn't local LRU ... old communication
https://www.garlic.com/~lynn/2006w.html#email821019
in this post
https://www.garlic.com/~lynn/2006w.html#46

a lot of the work that i had done as undergraduate in the 60s (that had been picked up and shipped in the cp67 product) ... was dropped in the simplification morph of cp67 (from 360/67) to vm370 (when general availability of address relocation was announced for 370 computers; the 360/67 was the only 360 model that had address relocation as a standard feature).

for other drift ... a recent folklore post about that period (mostly related to unbundling announcement and starting to charge for software)
https://www.garlic.com/~lynn/2008m.html#1
https://www.garlic.com/~lynn/2008m.html#2

for other folklore ... the announcement that all 370s would ship with virtual memory support ... required that all the other operating systems now add support for address relocation. one of the big issues was the heritage of application programs creating (i/o) channel programs and passing them to the supervisor for initiation/execution. While instruction addresses went through address relocation ... i/o channel programs didn't ... they continued to be "real". This created a disconnect ... since application programs (running in virtual address mode) would now be creating the channel programs with virtual addresses. This required the supervisor to create a copy of the passed i/o channel programs (created by applications), substituting real addresses for the virtual addresses.

CP67 had this kind of translation mechanism from the very beginning ... since it had to take the I/O channel programs created in the virtual machines ... make a copy ... converting all the virtual machine "virtual" addresses into real addresses. The initial transition of the flagship batch operating system (MVT) to virtual memory operation ... involved some simple stub code in MVT ... giving it a single large virtual address space (the majority of the code continued to run as if it were on a real machine with real storage equivalent to the large address space) and crafting "CCWTRANS" (from cp67) into the i/o supervisor (for making the copies of application i/o channel programs, substituting real addresses for virtual) ... a small sketch of the idea follows the post references below. some recent posts mentioning "CCWTRANS"
https://www.garlic.com/~lynn/2008g.html#45 authoritative IEFBR14 reference
https://www.garlic.com/~lynn/2008i.html#68 EXCP access methos
https://www.garlic.com/~lynn/2008i.html#69 EXCP access methos
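
the promised sketch of what that channel program copy/translation amounts to (my own illustration of the idea ... not the actual CCWTRANS code; the CCW layout is simplified and the paging helper is a hypothetical stand-in): walk the caller's channel program and build a shadow copy in which each data address has been translated from virtual to real, after making the page resident (and pinned for the duration of the i/o).

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define CCW_CHAIN 0x40u      /* "another CCW follows" flag (illustrative) */

struct ccw {                 /* simplified channel command word */
    uint8_t  op;             /* channel command */
    uint8_t  flags;          /* chaining flags */
    uint16_t count;          /* byte count */
    uint32_t addr;           /* data address: virtual on input, real in the shadow copy */
};

/* hypothetical stand-in for the paging service that makes the page
 * resident, pins it for the i/o, and returns the real address */
static uint32_t fix_and_translate(uint32_t vaddr)
{
    return vaddr + 0x100000u;   /* pretend mapping, for illustration only */
}

/* build a shadow copy of the caller's channel program with real data
 * addresses substituted; the channel is then started on the shadow copy,
 * never on the original (which still contains virtual addresses) */
static size_t translate_channel_program(const struct ccw *virt_prog,
                                        struct ccw *shadow, size_t max)
{
    size_t n = 0;
    while (n < max) {
        shadow[n] = virt_prog[n];                               /* copy the CCW    */
        shadow[n].addr = fix_and_translate(virt_prog[n].addr);  /* virtual -> real */
        n++;
        if (!(virt_prog[n - 1].flags & CCW_CHAIN))              /* end of program  */
            break;
    }
    return n;
}

int main(void)
{
    struct ccw prog[2] = { { 0x02, CCW_CHAIN, 80, 0x5000 },    /* read, chained */
                           { 0x02, 0,         80, 0x6000 } };  /* read, last    */
    struct ccw shadow[2];
    size_t n = translate_channel_program(prog, shadow, 2);
    printf("copied %zu CCWs, first real addr %#x\n", n, (unsigned)shadow[0].addr);
    return 0;
}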

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Wed, 20 Aug 2008 09:34:30 -0400
"Joe Morris" <j.c.morris@verizon.net> writes:
I posted earlier that the presentation was to be at BlackHat; based on the consensus of the articles quoted by Lynn it looks like my source was wrong and the planned presentation was at Defcon. Sorry 'bout that, Chief.

re:
https://www.garlic.com/~lynn/2008m.html#0 Fraud due to stupid failure to test for negative

what is black hat and what is defcon can get quite blurred ... since they are held in conjunction.

black hat, las vegas, 2-7aug
http://www.blackhat.com/
defcon, las vegas, 8-10aug
http://www.defcon.org/

picture shows DEFCON

Federal Judge Throws Out Gag Order Against Boston Students in Subway Case
http://blog.wired.com/27bstroke6/2008/08/federal-judge-t.html

this talks about DNS exploit:

Black Hat 2008 Aftermath
http://www.law.com/jsp/legaltechnology/pubArticleLT.jsp?id=1202423911432

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Unbelievable Patent for JCL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unbelievable Patent for JCL
Newsgroups: bit.listserv.ibm-main
Date: Wed, 20 Aug 2008 11:37:04 -0400
howard.brazee@CUSYS.EDU (Howard Brazee) writes:
I'm trying to figure out how to use computers in this function of the patent office. It would have to know how to find software patent ideas under a different name, to look at graphics, and use foreign databases. Someday computers will be able to do that task, but possibly not until after patents have outlived their usefulness.

there is some lore that (at least) some patents are apparently purposefully mis-categorized ... as part of strategy for subsequent litigation.

I've seen some past references to bayesian cluster analysis of patent applications ... that found possibly 30 percent of computer &/or software related patents filed in other categories.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Unbelievable Patent for JCL

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unbelievable Patent for JCL
Newsgroups: bit.listserv.ibm-main
Date: Wed, 20 Aug 2008 13:25:27 -0400
John.Mckown@HEALTHMARKETS.COM (McKown, John) writes:
Historically, software was copyrighted or "trade secret". But some court case in the US really messed that up (don't remember the case name). Since then, software patents have been pretty much a "free ride". Only recently have the courts started getting after frivolous software patents. Imagine, if you will, what would have happened if software patents had been around in the MVT days. The only scheduling package would likely be CA-7. The only tape management package would be CA-1. And, if properly written, the patent for those would be so broad as to have exclude similar functionality on non-MVT/MVS systems!

In the 60s, as undergraduate I had done a lot of dynamic, adaptive scheduling for cp67. A lot of this was dropped in the (simplification) morph from cp67 to vm370.

I continued to do 360/370 (cp67 & vm370) stuff during the future system era ... recent discussion of the period related to unbundling:
https://www.garlic.com/~lynn/2008m.html#1
https://www.garlic.com/~lynn/2008m.html#2

after future system effort was killed ... misc. past post
https://www.garlic.com/~lynn/submain.html#futuresys

there was a mad rush to get stuff back into the 370 product pipeline (both software & hardware) ... and this was somewhat behind the motivation to (re)release the stuff as the "resource manager". Also, the distraction during the future system period is claimed to have significantly contributed to clone processors gaining a market foothold. The original 23jun69 unbundling (a response to various litigation) managed to make the case that kernel software should still be free. However, the appearance of clone processors appeared to motivate a change in policy to also start charging for kernel software ... and my "resource manager" got selected to be the guinea pig for kernel software charging.

I also got told by people from corporate hdqtrs that my resource manager wasn't sophisticated enough ... that all the other resource managers in that era had lots of (manual) "tuning knobs" ... and my resource manager was deficient in the number of such "tuning knobs". It fell on deaf ears that the resource manager implemented its own dynamic adaptive scheduling ... and therefore didn't require all those manual tuning knobs ... and so I had to retrofit (at least the appearance of) some number of manual tuning knobs to get it by the corporate hdqtrs experts.
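
the difference is easy to see in miniature ... a "tuning knob" scheduler dispatches off whatever weights the operator last typed in, while a dynamic adaptive one derives the value from measured behavior against a target and so tracks a changing workload by itself. a toy sketch of the contrast (mine ... not the resource manager's actual algorithm):

#include <stdio.h>

struct task {
    double share_target;    /* fair-share entitlement (fraction of cpu)   */
    double recent_usage;    /* observed recent cpu consumption (fraction) */
    double knob_priority;   /* manual knob: operator-set, static          */
};

/* knob style: whatever the operator last typed in */
static double knob_style_priority(const struct task *t)
{
    return t->knob_priority;
}

/* dynamic adaptive style: priority computed from measured consumption
 * relative to entitlement ... a task running ahead of its share is
 * deferred, one running behind is favored, with no operator involved */
static double adaptive_style_priority(const struct task *t)
{
    return t->recent_usage / t->share_target;   /* lower value = dispatch sooner */
}

int main(void)
{
    struct task hog  = { .share_target = 0.25, .recent_usage = 0.60, .knob_priority = 1.0 };
    struct task meek = { .share_target = 0.25, .recent_usage = 0.05, .knob_priority = 1.0 };

    printf("knob:     hog=%.2f meek=%.2f (identical until somebody retunes)\n",
           knob_style_priority(&hog), knob_style_priority(&meek));
    printf("adaptive: hog=%.2f meek=%.2f (meek gets dispatched first)\n",
           adaptive_style_priority(&hog), adaptive_style_priority(&meek));
    return 0;
}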

Nearly a decade later (and nearly two decades after I had done the original work as an undergraduate for cp67), some corporate lawyers contacted me for examples of my original work. It supposedly represented "prior art" in some (scheduling) patent litigation that was going on at the time.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Yet another squirrel question - Results (very very long post)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Yet another squirrel question - Results (very very long post)
Newsgroups: alt.folklore.computers
Date: Wed, 20 Aug 2008 21:16:48 -0400
Rich Alderson <news@alderson.users.panix.com> writes:
That was followed by an IBM grant of some RTs, followed by RS/6000s running AIX, taking another half row, and the NeXT cubes, and pretty soon you were hard pressed to find a dumb terminal.

old posting with reference to summer '81 survey of (visits to) computing at various institutions (CMU, Bell Labs, LBL, Stanford, MIT, others)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)

from above (survey extract):


Stanford
Name            CPU     Mips    Memory  Disk    Total   Concurrent
                                (megs)  (megs)  Users   Users
SAIL            KL10(2) 3.6     10      1600    230     70
SCORE           20/60   2.0     4       400     230     55
VAX1            11/780  1.1     4       400     ?       small
VAX2            11/780  1.1     2       200     ?       small
IBM             4331    0.5     4       8-3310s 30      8
(16)            Alto    0.3/4.8 0.25/4  2/32    16      16

... snip ...

the 4331 was part of a joint study with PASC and only in use by people involved in the study.

the previous posting listed tables of machines at the mentioned institutions (from the survey). the survey also included descriptions of some number of other institutions ... including xerox sdd ... from that survey:
They have more machines than people. There are 300 machines for 200 employees. At least five of the machines are DORADOs (3 mips); the rest are a mixture of ALTOs, D machines, and Stars. Everyone has at least an ALTO in his office. All the machines are tied together with a 10 megabit Ethernet. On the net there are at least two file servers and various xerographic printers including a color printer

... snip ...

In addition to the table of machines at MIT ... the survey also mentioned (at MIT):
The 26 LISP machines are connected to the CHAOS net, and thus to several of the KA10s. Most of the VLSI work is being done on these machines. MIT is currently building them at the rate of two per month, at a cost of $50k to $100k each.

... snip ...

Visit to Larry Landweber at Univ of Wisc ... the computer science dept:


CPU              Mips

VAX 11/780       1.1
PDP 11/70        1.1
PDP 11/45        0.5
PDP 11/40        0.4
LSI 11/23 (8)    0.3
UNIVAC 1100/82
HP 3000

... snip ...

also mentioned in the survey (regarding univ. of wisc):
NSF has also just given Wisconsin, the Rand Corporation, and a few other smaller universities a grant to develop CSNET, a network to connect Computer Science research facilities. CSNET will connect ARPANET and other existing networks together. (This is not the same as BITNET, the RSCS based network being developed by CUNY and Yale). CSNET will be used to send messages, mail, and files between all computer science research groups.

... snip ...

part of the survey was looking at the split between institutions going to individual (networked) personal computers ... versus terminals into shared machines (in the bell labs case, "project" machines) ... a much more detailed Bell Labs portion of the summer '81 survey is reproduced here
https://www.garlic.com/~lynn/2006n.html#56 AT&T Labs vs. Google Labs - R&D History

other past posts in this thread:
https://www.garlic.com/~lynn/2008l.html#78 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#84 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#86 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008l.html#87 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008m.html#1 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008m.html#2 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008m.html#6 Yet another squirrel question - Results (very very long post)

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Thu, 21 Aug 2008 09:14:08 -0400
jmfbahciv <jmfbahciv@aol> writes:
I saw his answer this morning. He said he worked with networks and was familiar with the concept. I don't understand this; do you, Lynn?

Or could he be confusing the cybercurd with object oriented languages. This morning I asked if he read the bio of the guy who wrote it up.

I'm getting a funny feeling about OO and object oriented confusions but I hope I'm wrong.


re:
https://www.garlic.com/~lynn/2008m.html#4 Fraud due to stupid failure to test for negative

one possible scenario is that some amount of DOD networking is concentrated on cyber warfare ... both offensive and defensive. Boyd's OODA-loops evolved out of military conflict ... but his briefings started to get into applicability of OODA-loops to other types of competitive environments (commercial, business).

a recent example

Buzz of the Week: A cyberwar paradox
http://www.fcw.com/print/22_26/news/153509-1.html?topic=security

the above makes references to earlier articles about the Air Force touting its cyber command and then an article that it was suspending it:
So it was curious that on Aug. 12, the same day of the New York Times story, former FCW reporter Bob Brewin broke the story for Government Executive — confirmed by FCW — that the Air Force was suspending its cyber command program. As trumpeted in Air Force TV ads, the Cyber Command was seen as a way for DOD to coordinate its cyber warfare initiatives, both offensive and defensive. In October 2007, FCW named Air Force Maj. Gen. William Lord, who was leading the command, as a government Power Player.

... snip ...

additional conjecture is OODA-loop possibly being used out-of-context with no reference to its history and origin.

as to my original post that also drifted into the emperor's new clothes parable ... there are always the frequent references to what happens to the messenger (bearer of bad news). there could be more than a little of that in the injunction response to the MIT/transit presentation.

i've also referenced the emperor's new clothes parable and a long-winded decade-old post that included mention of the need for visibility into the underlying values of CDO-like instruments (in part, because two decades ago, toxic CDOs had been used in the S&L crisis to obfuscate underlying values ... and "unload" the properties ... for significantly more than they were actually worth).
https://www.garlic.com/~lynn/aepay3.htm#riskm

there was an article in the washington post a couple days ago about documents from 2006 by GSE executives about their brilliant/wonderful strategy moving into subprime mortgage (toxic) CDOs ... sort of left hanging in the air was obviously the strategy wasn't that wonderful ... but no comment about replacing those executives (which has been happening at other institutions that had followed similar strategy).

recent posts mentioning emperor's new clothes parable
https://www.garlic.com/~lynn/2008j.html#40 dollar coins
https://www.garlic.com/~lynn/2008j.html#60 dollar coins
https://www.garlic.com/~lynn/2008j.html#69 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#10 Why do Banks lend poorly in the sub-prime market? Because they are not in Banking!
https://www.garlic.com/~lynn/2008k.html#16 dollar coins
https://www.garlic.com/~lynn/2008k.html#27 dollar coins

other past posts in this thread:
https://www.garlic.com/~lynn/2008l.html#89 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#0 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#5 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#8 Fraud due to stupid failure to test for negative

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Yet another squirrel question - Results (very very long post)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Yet another squirrel question - Results (very very long post)
Newsgroups: alt.folklore.computers
Date: Thu, 21 Aug 2008 12:44:22 -0400
Quadibloc <jsavard@ecn.ab.ca> writes:
Software was written as an afterthought, to help people use those beasts. Gradually, things like compilers and operating systems got included, and some precautions were taken to prevent competitors from freeloading on this effort; thus, IBM unbundled and started charging for software as plug-compatibles started to emerge.

re:
https://www.garlic.com/~lynn/2008m.html#1 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008m.html#2 Yet another squirrel question - Results (very very long post)

note that the original 23jun69 unbundling announcement was in response to various litigation by the gov. and others. they had managed to make the case that kernel software should still be free. however, with the distraction of future system
https://www.garlic.com/~lynn/submain.html#futuresys

it is claimed to have significantly contributed to clone processors gaining foot-hold in the market. those clone processors then contributed to the decision to (also) start charging for kernel software (initially just kernel software that wasn't directly involved in low-level hardware support).

somewhat related recent archeological post in comp.arch
https://www.garlic.com/~lynn/2008m.html#7 Future architectures

and recent resource manager archeological post in bit.listserv.ibm-main
https://www.garlic.com/~lynn/2008m.html#10 Unbelievable Patent for JCL

referencing, in the mid-80s, being contacted by corporate lawyers involved in some sort of scheduling-related patent litigation and looking for copies of stuff that I had done nearly two decades earlier as an undergraduate (as an example of prior art)

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Anyone heard of a company called TIBCO ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Anyone heard of a company called TIBCO ?
Newsgroups: bit.listserv.ibm-main
Date: Thu, 21 Aug 2008 15:27:05 -0400
tbabonas@COMCAST.NET (Tony B.) writes:
Supposedly they develop mainframe/open systems related products.

shortly after they were established as an independent company, we had been brought in for a week's presentation
https://en.wikipedia.org/wiki/TIBCO

and from some '97 archive:
Internet Publish and Subscribe Protocol

TIBCO Inc., and more than a dozen Internet companies have endorsed a proposed new industry standard for the "push" model of information distribution over the Internet. The proposed standard, called publish and subscribe, will reduce Internet traffic and make it easier to find and receive information on-line. The companies, which include Cisco Systems, Inc., CyberCash, Informix, Infoseek, JavaSoft, Sun Microsystems, Verisign, NETCOM, and others in addition to TIBCO, announced plans, products and support for publish and subscribe. TIBCO and Cisco Systems are developing an open reference specification for publish and subscribe technology.


... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Thu, 21 Aug 2008 16:25:31 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
there was an article in the washington post a couple days ago about documents from 2006 by GSE executives about their brilliant/wonderful strategy moving into subprime mortgage (toxic) CDOs ... sort of left hanging in the air was obviously the strategy wasn't that wonderful ... but no comment about replacing those executives (which has been happening at other institutions that had followed similar strategy).

re:
https://www.garlic.com/~lynn/2008m.html#12 Fraud due to stupid failure to test for negative

a business news program just "asked" what the GSEs did wrong ... and their immediate answer: they bought $5bil in (toxic) CDOs with $80mil of capital ... i.e. heavily leveraged -- not quite 100 times (the potential $25bil bailout estimates for the GSEs seem to be for all holdings)

this seems penny-ante stuff compared to other institutions that have already taken approx. $500bil in write-downs (frequently in previously triple-A rated toxic CDOs), and projections are that it will eventually be $1-$2 trillion.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Fri, 22 Aug 2008 10:47:46 -0400
jmfbahciv <jmfbahciv@aol> writes:
Getting their names and trying to follow where they go would be a wise way of predicting where the next mess will happen in 8-10 years. There's a guy, whose first name is Sandy and I can't remember his last, who seems to get in the middle of messes. I have not determined if he is an attractor or a catalyst yet.

re:
https://www.garlic.com/~lynn/2008m.html#12 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#15 Fraud due to stupid failure to test for negative

try here:
http://www.pbs.org/wgbh/pages/frontline/shows/wallstreet/weill
as well as
http://www.pbs.org/wgbh/pages/frontline/shows/wallstreet/weill/demise.html

news this morning is that Bernanke is saying that there won't be any more making investment banks whole. a possible clinker is that, with the repeal of Glass-Steagall (i.e. passed in the wake of the crash of '29 to keep the safety&soundness of regulated banking separate from unregulated, risky investment banking), there are now some regulated banks that have merged with/acquired investment banking units (that got heavily leveraged into toxic CDOs ... as did the GSEs).

some recent references to some of the process of repealing Glass-Steagall:
https://www.garlic.com/~lynn/2008k.html#36 dollar coins
https://www.garlic.com/~lynn/2008k.html#41 dollar coins

another post in a different thread:
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers

the above references Citigroup paying a $400mil fine in 2002, with the CEO forbidden from communicating with various people in the company.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Fri, 22 Aug 2008 13:30:39 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
business news program just "asked" what did the GSEs do wrong? ... and their immediate answer: they bought $5bil in (toxic) CDOs with $80mil of capital ... i.e. heavily leveraged -- not quite 100times (the potential $25bil bailout estimates for GSEs seems to be all holdings)

re:
https://www.garlic.com/~lynn/2008m.html#12 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#15 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#16 Fraud due to stupid failure to test for negative

any issue about GSE executives losing their jobs ... after bragging about how wonderful their (heavily leveraged) buying of $5bil in toxic CDOs with only $80mil in capital was (when executives at other institutions that had engaged in similar activity were losing their jobs)? ... news today attributes comments to Buffett that if the GSEs weren't gov't backed institutions, they would already have been gone ... his company had been the largest Freddie shareholder around 2000 and 2001, but sold its shares after realizing that both companies were trying "to report quarterly earnings to please Wall Street" ... they needed to keep earnings growing to keep the stock market happy and turned to accounting to do it.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM-MAIN longevity

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM-MAIN longevity
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 23 Aug 2008 16:48:13 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:

BITNET    435
ARPAnet  1155
CSnet     104 (excluding ARPAnet overlap)
VNET     1650
EasyNet  4200
UUCP     6000
USENET   1150 (excluding UUCP nodes)

re:
https://www.garlic.com/~lynn/2008l.html#2 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#6 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#12 IBM-MAIN longevity

for a little *arpanet* (arpanet pre-tcp/ip made a distinction between the number of network IMP nodes and the number of hosts connected to IMPs) from RFC:


https://www.garlic.com/~lynn/rfcidx4.htm#1296


1296 I
Internet Growth (1981-1991), Lottor M., 1992/01/29 (9pp) (.txt=20103)
 (Refs 921, 1031, 1034, 1035, 1178)

08/81         213      Host table #152
05/82         235      Host table #166
08/83         562      Host table #300
10/84       1,024      Host table #392
10/85       1,961      Host table #485
02/86       2,308      Host table #515
11/86       5,089
12/87      28,174
07/88      33,000
10/88      56,000
01/89      80,000
07/89     130,000
10/89     159,000
10/90     313,000
01/91     376,000
07/91     535,000
10/91     617,000
01/92     727,000

... snip ...

by comparison VNET (internal network hosts and nodes were equivalent):

reference to more than 300 nodes in 1979
https://www.garlic.com/~lynn/2006r.html#7

reference to 1000 nodes in 1983:
https://www.garlic.com/~lynn/internet.htm#22
https://www.garlic.com/~lynn/99.html#112

reference to nodes approaching 2000 in 1985
https://www.garlic.com/~lynn/2006t.html#49

other internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM-MAIN longevity

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM-MAIN longevity
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 24 Aug 2008 08:50:19 -0400
Morten Reistad <first@last.name> writes:
By the time the Internet was commercialised July 1st 1993, there were more than 100 institutions connected, 14 of which didn't pass muster with the AUPs and were fully commercial. The user group uucp service had 520 paying customers, of which around 450 had everyday dialup sessions. By early 1995 there were over 300 leased-line customers, half of which connected over frame relay; and over 10000 dialup accounts.

in '92, got a full usenet satellite feed (inbound) ... in return for doing (sat modem) device drivers for a couple of different platforms and an article for (june '93) boardwatch magazine (picture of me in the backyard with the dish). one of the machines was a 486 w/dos and waffle. misc. past refs:
https://www.garlic.com/~lynn/2000.html#38 Vanishing Posts...
https://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/2001h.html#66 UUCP email
https://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2006m.html#11 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2007n.html#17 What if phone company had developed Internet?
https://www.garlic.com/~lynn/2007p.html#16 Newsweek article--baby boomers and computers

dish was significantly smaller than the 4.5m dishes for the tdma system on the internal network (that I had been working with nearly a decade earlier) ... had started with some telco T1 circuits, some T1 circuits on campus T3 collins digital radio (microwave, multiple locations in south san jose) and some T1 circuits on an existing C-band system that used 10m dishes (west coast / east coast). then got to work on design of a tdma system with 4.5m dishes for a Ku-band system and a transponder on sbs-4 (that went up on 41-d, 5sep84). misc. past posts mentioning 41-d:
https://www.garlic.com/~lynn/2000b.html#27 Tysons Corner, Virginia
https://www.garlic.com/~lynn/2002p.html#28 Western Union data communications?
https://www.garlic.com/~lynn/2003j.html#29 IBM 3725 Comms. controller - Worth saving?
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2004b.html#23 Health care and lies
https://www.garlic.com/~lynn/2004o.html#60 JES2 NJE setup
https://www.garlic.com/~lynn/2005h.html#21 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#17 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2006k.html#55 5963 (computer grade dual triode) production dates?
https://www.garlic.com/~lynn/2006m.html#11 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2006m.html#16 Why I use a Mac, anno 2006
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006v.html#41 Year-end computer bug could ground Shuttle
https://www.garlic.com/~lynn/2007p.html#61 Damn

past posts in thread:
https://www.garlic.com/~lynn/2008k.html#81 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008k.html#83 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008k.html#85 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#0 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#1 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#2 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#3 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#4 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#5 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#6 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#7 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#8 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#9 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#10 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#12 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#13 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#16 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#17 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#19 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#20 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008m.html#18 IBM-MAIN longevity

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM-MAIN longevity

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM-MAIN longevity
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 24 Aug 2008 12:19:33 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
dish was significantly smaller than the 4.5m dishes for the tdma system on the internal network (that I had been working with nearly a decade earlier) ... had started with some telco T1 circuits, some T1 circuits on campus T3 collins digital radio (microwave, multiple locations in south san jose) and some T1 circuits on an existing C-band system that used 10m dishes (west coast / east coast). then got to work on design of a tdma system with 4.5m dishes for a Ku-band system and a transponder on sbs-4 (that went up on 41-d, 5sep84).

re:
https://www.garlic.com/~lynn/2008m.html#19 IBM-MAIN longevity

other posts mentioning various "HSDT" activities
https://www.garlic.com/~lynn/subnetwork.html#hsdt

in 1980, the STL lab was starting to burst at the seams (it had only opened 4yrs earlier ... dedicated the same week the smithsonian air&space opened) ... and the decision was made to move 300 people from the IMS database group to an offsite location. The group looked at using "remote" 3270s into the STL mainframes ... but found the response totally unacceptable. The decision was then made to go with local 3270s at the remote location using HYPERchannel as (mainframe) channel extender ... over a T1 circuit (on the campus T3 collins digital radio serving the area).

I got involved to write the driver support for HYPERchannel. The channel extender support wasn't (totally) software transparent. HYPERchannel had a (remote) A51x channel emulation box that (mainframe) controllers could connect to. Normal mainframe channel operation executed channel programs directly out of mainframe memory. However, the latency over remote connections made this infeasible ... so the mainframe device driver had to scan the channel program and make an emulated copy which was downloaded to the memory of the HYPERchannel A51x box ... and then executed directly out of A51x memory.

This is analogous to what a virtual machine operating system has to do, scanning the channel program and making a shadow copy ... with real addresses substituted for the virtual machine's "virtual" addresses. recent discussion (in comp.arch) of the virtual machine requirement for creating channel program copies
https://www.garlic.com/~lynn/2008m.html#7 Future architectures
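
purely as illustration (hypothetical structures and names, not the actual HYPERchannel driver or any vm code), a minimal C sketch of the general technique: walk the channel program one CCW at a time, copy each CCW, substitute a translated data address, and stop at the end of the chain ... the resulting copy is what would get downloaded to (and executed from) the remote box:

  /* illustrative sketch only -- hypothetical types/names */
  #include <stddef.h>
  #include <stdint.h>

  typedef struct {            /* simplified channel command word */
      uint8_t  op;            /* channel command opcode */
      uint32_t addr;          /* data address (24-bit in real CCWs) */
      uint8_t  flags;         /* chaining flags, etc. */
      uint16_t count;         /* byte count */
  } ccw_t;

  #define CCW_CHAIN 0x40      /* "command chaining" flag (illustrative) */

  /* hypothetical translation: host address -> address valid in the remote box */
  extern uint32_t translate_addr(uint32_t host_addr, uint16_t count);

  /* copy a channel program, substituting addresses, until chaining ends */
  size_t build_shadow(const ccw_t *orig, ccw_t *shadow, size_t max)
  {
      size_t i = 0;
      for (; i < max; i++) {
          shadow[i] = orig[i];                    /* copy opcode/flags/count */
          shadow[i].addr = translate_addr(orig[i].addr, orig[i].count);
          if (!(orig[i].flags & CCW_CHAIN)) {     /* last CCW in the chain */
              i++;
              break;
          }
      }
      return i;   /* number of CCWs in the shadow copy to download/execute */
  }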

shot of 3270 logo screen used:
https://www.garlic.com/~lynn/vmhyper.jpg

3270 logo screen shot

as i've mentioned in the past, there was no noticeable difference in 3270 terminal response ... and overall system thruput actually increased 10-15 percent (the issue being that the HYPERchannel A220 local channel interface had much lower channel busy overhead than the 327x controller boxes ... doing the same operations).

misc past references:
https://www.garlic.com/~lynn/94.html#24 CP spooling & programming technology
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001k.html#46 3270 protocol
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2004e.html#33 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2005u.html#23 Channel Distances
https://www.garlic.com/~lynn/2006i.html#34 TOD clock discussion
https://www.garlic.com/~lynn/2006u.html#19 Why so little parallelism?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Sun, 24 Aug 2008 15:19:40 -0400
jmfbahciv <jmfbahciv@aol> writes:
I saw somebody reference OODA-loops in sci.physics two days ago. He was talking about DARPA and listing goals and actions. I asked him yesterday if he's read about OODA-loops. I'll find out later what his answer is.

for another reference, four part video of Boyd (circa 1990) ... talks about a number of things, review of the F15, the Toyota system, etc, including some of the issues that I've posted about before related to the US automobile C4 effort:
http://www.youtube.com/watch?v=Rbb48uUOkqQ
http://www.youtube.com/watch?v=i5c3yMy-llA
http://www.youtube.com/watch?v=q5TTeMCoRhM
http://www.youtube.com/watch?v=Zbbh9bYOOok

there are some others that can be found:
http://www.youtube.com/watch?v=qh0k9kc3EY0

past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

misc. recent posts mentioning US automobile C4 effort:
https://www.garlic.com/~lynn/2008.html#84 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008.html#85 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008b.html#9 folklore indeed
https://www.garlic.com/~lynn/2008c.html#22 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#68 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008e.html#30 VMware signs deal to embed software in HP servers
https://www.garlic.com/~lynn/2008e.html#31 IBM announced z10 ..why so fast...any problem on z 9
https://www.garlic.com/~lynn/2008f.html#50 Toyota's Value Innovation: The Art of Tension
https://www.garlic.com/~lynn/2008h.html#65 Is a military model of leadership adequate to any company, as far as it based most on authority and discipline?
https://www.garlic.com/~lynn/2008i.html#31 Mastering the Dynamics of Innovation
https://www.garlic.com/~lynn/2008k.html#2 Republican accomplishments and Hoover
https://www.garlic.com/~lynn/2008k.html#50 update on old (GM) competitiveness thread
https://www.garlic.com/~lynn/2008k.html#58 Mulally motors on at Ford

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Future architectures

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architectures
Newsgroups: alt.comp.hardware.pc-homebuilt,comp.arch,sci.electronics.design
Date: Mon, 25 Aug 2008 10:10:07 -0400
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Aside: does anyone know why the "Harvard" approach was promoted from being a trivial but important variation of Von Neumann to being of equal rank, starting about 20 years ago? Because it assuredly ain't so, despite the nonsense in Wikipedia, and almost all programming languages have used separate code and data "address spaces" since the invention of COBOL and FORTRAN, and were/are always talked about as using the Von Neumann model (as they do).

at the time (in the following email), i was still on a kick about (the same) shared pages appearing at different virtual addresses in different virtual address spaces (or even the same shared pages appearing at different virtual addresses in the same virtual address space) ... misc. related posts
https://www.garlic.com/~lynn/submain.html#adcon

from long ago and far away (with regard to 3090):

Date: 11/17/83 13:40:41
To: wheeler

The machine has a split cache, the instruction cache is managed with real addresses. No problems.

The operand cache is managed with two directories: one holds LOGICAL addresses (i.e. mixture of real and virtual), and the other holds real addresses. It appears to the outside world to be managed with real addresses. I can think of no reason why shared pages will be peculiar in this environment.


... snip ... top of post, old email index

related old email about the 3090 cache operation
https://www.garlic.com/~lynn/2003j.html#email831118

in this post, also mentioning 801 (separate I&D cache) from 1975:
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208

this (earlier) email mentions 5880 (Amdahl mainframe clone) having separate I & D caches
https://www.garlic.com/~lynn/2006b.html#email810318
in this post
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

misc. posts mentioning 801 (romp, rios, power/pc, etc).
https://www.garlic.com/~lynn/subtopic.html#801

One of the differences between the 801 split cache and the 3090 (5880) split cache ... was that the 3090 (& 5880) managed cache consistency (between I & D caches) in hardware ... while 801 required software to flush the D-cache & invalidate the I-cache (e.g. program loaders which may have modified instruction streams ... in the data cache ... in order to make sure that modifications in the D-cache were correctly reflected in the I-cache instruction stream).
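
a minimal C sketch of what that software-managed consistency looks like from a program loader's point of view ... the dcache_flush/icache_invalidate primitives below are hypothetical stand-ins for the real (privileged) cache-control operations; gcc's __builtin___clear_cache does the equivalent on current hardware:

  #include <string.h>

  /* hypothetical cache-control primitives standing in for the real
     (privileged) 801/romp/rios operations */
  extern void dcache_flush(void *addr, unsigned len);      /* write dirty lines to memory */
  extern void icache_invalidate(void *addr, unsigned len); /* discard stale instruction lines */

  /* loader copies instructions (as data), then makes them visible for execution */
  void load_code(void *dst, const void *src, unsigned len)
  {
      memcpy(dst, src, len);          /* new instructions land in the D-cache */
      dcache_flush(dst, len);         /* push modified lines out to memory */
      icache_invalidate(dst, len);    /* force the I-cache to refetch from memory */
      /* with gcc: __builtin___clear_cache((char *)dst, (char *)dst + len); */
  }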

other old email mentioning 801
https://www.garlic.com/~lynn/lhwemail.html#801

semi-related recent post in this thread (discussing virtual memory & paging from the 60s):
https://www.garlic.com/~lynn/2008m.html#7 Future architecture

for related topic drift ... "small" shared segments in ROMP chip (801 used later in PC/RT)
https://www.garlic.com/~lynn/2006y.html#email841114c
https://www.garlic.com/~lynn/2006y.html#email841127

in this post:
https://www.garlic.com/~lynn/2006y.html#36 Multiple mappings

and (this time, Iliad chip ... another 801)
https://www.garlic.com/~lynn/2006u.html#email830420

in this post:
https://www.garlic.com/~lynn/2006u.html#37 To RISC or not to RISC

similar post along this line
https://www.garlic.com/~lynn/2007f.html#22 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2008j.html#82 Taxes

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Mon, 25 Aug 2008 10:43:30 -0400
Greg Menke <gusenet@comcast.net> writes:
If you say so.. Analyzers, scopes and software tools make dedicated blinky led's basically irrelevant for troubleshooting.

semi-related post mentioning bit-error-testers on link:
https://www.garlic.com/~lynn/2008l.html#16 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#17 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#19 IBM-MAIN longevity

the "real" indicator (since all the links required link encryptors) was when the sync light went out on the link encryptors. getting link encryptors back in sync was more painful than simply resending block-in-error ... part of the motivation for 1) FEC (forward error encrypting) and 2) transition away from link encryptors to (strong) packet encryption. other motivation was a lot of money was being spent on link encryptors (circa 85/86, there was some comment that the internal network had over half of all the link encryptors in the world).

old email mentioning PGP-like public key encryption
https://www.garlic.com/~lynn/2006w.html#email810515

in this post
https://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network

other old email mentioning public key and/or crypto
https://www.garlic.com/~lynn/lhwemail.html#crypto

recent crypto related thread drift:
https://www.garlic.com/~lynn/2008h.html#87 New test attempt
https://www.garlic.com/~lynn/2008i.html#86 Own a piece of the crypto wars
https://www.garlic.com/~lynn/2008j.html#43 What is "timesharing" (Re: OS X Finder windows vs terminal window weirdness)

and for other drift, some old posts mentioning working with cyclotomics regarding FEC:
https://www.garlic.com/~lynn/2001.html#1 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2002p.html#53 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2003e.html#27 shirts
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004o.html#43 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2005n.html#27 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2007.html#29 Just another example of mainframe costs
https://www.garlic.com/~lynn/2007j.html#4 Even worse than UNIX
https://www.garlic.com/~lynn/2007v.html#82 folklore indeed
https://www.garlic.com/~lynn/2008l.html#19 IBM-MAIN longevity

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Some confusion about virtual cache

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Some confusion about virtual cache
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 25 Aug 2008 13:30:52 -0400
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Sorry - I wasn't clear. I didn't mean that the segment could be only read or written, but that stores and fetches of parts of it (e.g. words) would be atomic. It would have full memory semantics, but would not be as fast as unshared memory, even in the absence of any clashes.

801/risc philosophy was solidly against supporting cache consistency and smp operation/configuration. in the late 80s, there was a four processor "single chip rios" effort that had a flag for segments that would bypass cache (i.e. data in segments identified as "non-cached" would have memory load&stores that bypassed caching). standard application data ... either non-shared and/or r/o shared ... would have segments identified as cache'able ... but data requiring multiprocessing serialization ... would be positioned in segments identified as non-cached.

we had done something analogous for 370 16-way SMP more than a decade earlier (that didn't ship as a product).

another example of restructuring data (for 801/risc rios) was the aix journaled filesystem ... where all the unix filesystem metadata was collected in a storage area that was flagged as "transaction" memory, i.e. allowing changed/modified filesystem metadata to be identified for logging/journaling ... w/o requiring explicit logging calls whenever transaction data was modified.

misc. past posts mentioning 801, risc, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

misc. past posts mentioning "live oak" (four processor, single-chip rios)
https://www.garlic.com/~lynn/2000c.html#21 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2004q.html#40 Tru64 and the DECSYSTEM 20
https://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
https://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?

some of the above makes reference to the ("alternative") cluster approach ... trying to heavily leverage commodity priced components (w/o cache consistency ... that was eventually announced as the corporate supercomputer) ... misc. old email
https://www.garlic.com/~lynn/lhwemail.html#medusa

for other topic drift
https://www.garlic.com/~lynn/2008m.html#22 Future architectures

... reference to a more detailed 3090 cache description ... it has a small "fast" logical (aka virtual) index ... that was kept consistent with the larger real index
https://www.garlic.com/~lynn/2003j.html#email831118
in this post
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208

the 370 16-way SMP effort in the mid-70s ... leveraged charlie's invention of the compare&swap instruction ("CAS" was chosen because they are charlie's initials) ... misc. past posts mentioning SMP and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
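
a minimal sketch of the kind of serialization compare&swap enables ... this uses the gcc atomic builtin as a stand-in for the 370 CS instruction; the pattern is the same: fetch the old value, compute the new one, and retry if some other processor got there first:

  #include <stdint.h>

  /* lock-free increment in the style of a CS loop */
  void atomic_add(uint32_t *word, uint32_t delta)
  {
      uint32_t old = __atomic_load_n(word, __ATOMIC_RELAXED);
      uint32_t new_val;
      do {
          new_val = old + delta;
          /* on failure the builtin refetches the current value into 'old',
             so the loop simply recomputes and retries */
      } while (!__atomic_compare_exchange_n(word, &old, new_val, 0,
                                            __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
  }

(the 370 CS instruction similarly returns the current value on a miscompare, so the retry loop has the same shape in assembler.)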

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Taxes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taxes
Newsgroups: alt.folklore.computers
Date: Mon, 25 Aug 2008 14:03:55 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
Government is in the business of seeing how far it can milk the taxpayers without removing vital body parts in the process.

Are Your Tax Dollars Paying for Excessive CEO Salaries?
http://www.consumeraffairs.com/news04/2008/08/ceo_taxpayers.html

the author was interviewed on a tv business news show this morning along with a lobbyist. the author used the line that there are some secretaries (in financial institutions) paying a higher tax rate than their CEO bosses. The lobbyist attempted to position the argument as CEOs reasonably having larger salaries than secretaries ... obfuscating the reference to "loop-holes" congress has passed that allow CEOs to have a lower tax rate. The point wasn't directly the size of the salary ... but that both could reasonably be expected to pay at least the same tax rate.

this is separate from past references regarding executives now having a salary ratio of 400:1 to that of standard workers ... up from a ratio of 20:1 ... and much more than the 10:1 found in other cultures/countries
https://www.garlic.com/~lynn/2008i.html#73 Should The CEO Have the Lowest Pay In Senior Management?
https://www.garlic.com/~lynn/2008j.html#24 To: Graymouse -- Ireland and the EU, What in the H... is all this about?
https://www.garlic.com/~lynn/2008j.html#76 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#71 Cormpany sponsored insurance

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Mon, 25 Aug 2008 14:54:45 -0400
jmfbahciv <jmfbahciv@aol> writes:
Getting their names and trying to follow where they go would be a wise way of predicting where the next mess will happen in 8-10 years. There's a guy, whose first name is Sandy and I can't remember his last, who seems to get in the middle of messes. I have not determined if he is an attractor or a catalyst yet.

re:
https://www.garlic.com/~lynn/2008m.html#12 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#15 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#16 Fraud due to stupid failure to test for negative

i've noted before a past comment (on one of the tv business news shows) regarding Bernanke's litany about needing new regulations ... that american bankers are the most inventive in the world and have managed to totally screwup the system at least once a decade regardless of the measures put in place attempting to prevent it:
https://www.garlic.com/~lynn/2008h.html#90 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008i.html#30 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008i.html#77 Do you think the change in bankrupcy laws has exacerbated the problems in the housing market leading more people into forclosure?

Looking at various recent articles ... there are a couple of items that I found (interesting?):

1) claims that the current credit problem was because (toxic) CDOs were too hard to evaluate

2) that wall street doesn't see the enormous profits going into the future (that they saw in the earlier part of this decade by heavily leveraging toxic CDOs).

wall street supposedly had the creme de la creme of financial experts, earning enormous compensation (taking in well over a hundred billion in just bonuses in the 2002-2007 period) ... and they supposedly weren't able to figure out that trillions of dollars in poor quality (&/or subprime) mortgages were disappearing and then reappearing as triple-A rated toxic CDOs.

assuming purely random difficulty with evaluating (triple-A rated) toxic CDOs ... then there should be as much under-evaluation as over-evaluation ... implying that there would be as many "write-ups" (i.e. selling toxic CDOs at 200 percent profit) as there are "write-downs" (selling toxic CDOs at 22 cents on the dollar ... eventually there will possibly be $1tril - $2tril in write-downs)

an alternative interpretation was that (triple-A rated) toxic CDOs were being used just like toxic CDOs were used two decades ago during the S&L crisis, to unload property at significantly higher value (people selling the toxic CDOs understood the value, leveraging toxic CDOs so that the buyers would pay a much higher premium ... obfuscating the actual underlying value).

as to profit/earnings ... from an institutional standpoint, it looks like the profits of a couple yrs ago ... are actually turning out to be enormous losses (to the institution; it isn't likely the responsible individuals are going to return their salaries and bonuses).

long-winded, decade old post discussing various things ... including needing visibility into the underlying values of CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

misc. past posts mentioning the $137bil in wall street bonuses for 2002-2007:
https://www.garlic.com/~lynn/2008f.html#76 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#95 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#52 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers
https://www.garlic.com/~lynn/2008h.html#42 The Return of Ada
https://www.garlic.com/~lynn/2008i.html#73 Should The CEO Have the Lowest Pay In Senior Management?
https://www.garlic.com/~lynn/2008j.html#3 dollar coins
https://www.garlic.com/~lynn/2008j.html#24 To: Graymouse -- Ireland and the EU, What in the H... is all this about?
https://www.garlic.com/~lynn/2008j.html#75 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#11 dollar coins

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Mon, 25 Aug 2008 16:24:05 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
long-winded, decade old post discussing various things ... including needing visibility into the underlying values of CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm


re:
https://www.garlic.com/~lynn/2008m.html#26 Fraud due to stupid failure to test for negative

for other recent news tidbits ...

Report: FBI saw mortgage crisis coming in '04
http://latimesblogs.latimes.com/laland/2008/08/report-fbi-saw.html
Anyone smell a stench? The FBI knew about the housing scams in 2004
http://www.digitaljournal.com/article/258992
FBI saw threat of mortgage crisis
http://www.latimes.com/business/la-fi-mortgagefraud25-2008aug25,1,4792318.story

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Yet another squirrel question - Results (very very long post)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Yet another squirrel question - Results (very very long post)
Newsgroups: alt.folklore.computers
Date: Tue, 26 Aug 2008 09:32:46 -0400
Louis Krupp <lkrupp@pssw.nospam.com.invalid> writes:
UNIX was an emotional subject back in the day. It got a little bit of time at DECUS, but not a lot, although that may have changed since I stopped going in 1985 or so. Back at the ranch (I was working for a university), academics liked UNIX, but computing center staff (and, I suspect, most DEC employees) were happy with VMS. One of my coworkers derided VMS as a "1960s style OS," but when logins to UNIX took forever because it was doing a linear search through /etc/passwd, he explained that UNIX hadn't been intended to support lots of users. (This was in the early 80's, and I don't remember which version of UNIX we were using. We may have been running it on a PDP/11. I remember the screen editor by Interactive Systems -- ined or something.) As would have been expected, arguments about the relative values of operating systems generated more heat than light.

something similar happened slightly more than a decade later with growing loads on webservers (and other servers) and the linear search of the FINWAIT list. There were assumptions that "sessions" would be long-running and that very few sessions would ever be in the process of being closed. The use of TCP (by HTTP) violated those assumptions about sessions (in part because HTTP wasn't a session oriented protocol). there was a period when lots of webservers found themselves spending 90-95% of their cpu scanning the FINWAIT list.
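
a toy sketch (not any particular TCP stack's code) of why that was so painful: if every arriving segment is matched against the closing connections by walking a linked list, the per-packet cost grows with the number of connections in FINWAIT ... hashing on the address/port 4-tuple makes the same lookup roughly constant:

  #include <stddef.h>
  #include <stdint.h>

  /* illustrative connection block -- hypothetical, not any stack's structure */
  struct conn {
      uint32_t laddr, raddr;
      uint16_t lport, rport;
      struct conn *next;
  };

  /* O(n) per segment: walk the whole FINWAIT list looking for a match */
  struct conn *finwait_linear(struct conn *head, uint32_t laddr, uint16_t lport,
                              uint32_t raddr, uint16_t rport)
  {
      for (struct conn *c = head; c; c = c->next)
          if (c->laddr == laddr && c->lport == lport &&
              c->raddr == raddr && c->rport == rport)
              return c;
      return NULL;
  }

  /* roughly O(1) per segment: hash the 4-tuple into a bucket, walk a short chain */
  #define NBUCKET 1024
  struct conn *finwait_hashed(struct conn *bucket[NBUCKET], uint32_t laddr,
                              uint16_t lport, uint32_t raddr, uint16_t rport)
  {
      uint32_t h = (laddr ^ raddr ^ ((uint32_t)lport << 16) ^ rport) % NBUCKET;
      return finwait_linear(bucket[h], laddr, lport, raddr, rport);
  }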

the advent of web usage was affecting other session protocols also. i've mentioned being called in to consult with a small client/server startup that wanted to do payment transactions ... which is now frequently referred to as electronic commerce ... misc. past posts mentioning part of that infrastructure called the payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

they had a growing number of FTP (download) servers ... it started out with people "purchasing" the browser and downloading it. this was before front-end boundary routers did load-balancing routing of incoming transactions to a pool of backend servers (something I first saw being developed and deployed at Google). The (growing number of) server names were qualified with a numeric suffix: 1, 2, ... 10, etc. The last one I remember was a large sequent box (I think given the "20" suffix in the server name). The sequent people said that they had previously had to deal with a large number of unix scale-up issues ... having commercial customers with heavy loads ... things like 20,000 telnet sessions.
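
for illustration only (hypothetical names, not any vendor's implementation), the kind of front-end selection that replaced the numbered-server-name approach ... the boundary box picks a live backend from a pool, so clients only ever see the one advertised name:

  #include <stdatomic.h>
  #include <stddef.h>

  /* hypothetical backend pool behind a single advertised service name */
  struct backend { const char *addr; int healthy; };

  /* round-robin selection, skipping backends marked unhealthy */
  const struct backend *pick_backend(struct backend *pool, int n)
  {
      static atomic_uint next;
      for (int tries = 0; tries < n; tries++) {
          unsigned i = atomic_fetch_add(&next, 1) % (unsigned)n;
          if (pool[i].healthy)
              return &pool[i];
      }
      return NULL;   /* nothing healthy in the pool */
  }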

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Quality of IBM school clock systems?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Quality of IBM school clock systems?
Newsgroups: alt.folklore.computers
Date: Tue, 26 Aug 2008 10:14:04 -0400
Roland Hutchinson <my.spamtrap@verizon.net> writes:
People with absolute pitch have to _learn_ relative pitch just like the rest of us in the course of musical training. It's just a very different task for them to learn it, since they have a great deal of interference from their absolute pitch sense, and sometimes it is neglected in their training because they can go a long ways without it in tasks (such as dictation and score reading) that others often have great difficulty with.

news item from today

'Perfect Pitch' In Humans Far More Prevalent Than Expected
http://www.sciencedaily.com/releases/2008/08/080826080600.htm

from above:
Humans are unique in that we possess the ability to identify pitches based on their relation to other pitches, an ability called relative pitch. Previous studies had shown that animals such as birds, for instance, can identify a series of repeated notes with ease, but when the notes are transposed up or down even a small amount, the melody becomes completely foreign to the bird.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Taxes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taxes
Newsgroups: alt.folklore.computers
Date: Tue, 26 Aug 2008 11:41:34 -0400
greymaus <greymausg@mail.com> writes:
I suppose, not being in the US. There were a whole lot of really stupid ideas (seen from now) in most of the affected countries, one of the worse was something called CFD here, which ended up costing one man over a billion euros (so far). Greed, eating into the social cohesion of countries like acid.

Re: Knowing that the whole lot was going to collapse, I have been telling people that for years, but was described as being jealous of the success of the con men, now some of the victims hate me more than the ones that swindled them. Cassandra wasn't popular either.


re:
https://www.garlic.com/~lynn/2008m.html#25 Taxes

old line about being told that they could have forgiven you for being wrong, but they were never going to forgive you for being right ... a few past references:
https://www.garlic.com/~lynn/2002k.html#61 arrogance metrics (Benoits) was: general networking
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003i.html#71 Offshore IT
https://www.garlic.com/~lynn/2004k.html#14 I am an ageing techy, expert on everything. Let me explain the
https://www.garlic.com/~lynn/2007.html#26 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
https://www.garlic.com/~lynn/2007k.html#3 IBM Unionization
https://www.garlic.com/~lynn/2007r.html#6 The history of Structure capabilities
https://www.garlic.com/~lynn/2008c.html#34 was: 1975 movie "Three Days of the Condor" tech stuff

the other line from the dedication of Boyd Hall at the USAF weapons school:
https://www.garlic.com/~lynn/2000e.html#35 War, Chaos, & Business (web site), or Col John Boyd
https://www.garlic.com/~lynn/2007.html#20 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
https://www.garlic.com/~lynn/2007h.html#74 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM?
https://www.garlic.com/~lynn/2007j.html#77 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#3 IBM Unionization
https://www.garlic.com/~lynn/2007k.html#5 IBM Unionization
https://www.garlic.com/~lynn/2007n.html#4 the Depression WWII
https://www.garlic.com/~lynn/2007n.html#44 the Depression WWII
https://www.garlic.com/~lynn/2008b.html#45 windows time service

other posts referencing Boyd
https://www.garlic.com/~lynn/subboyd.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Baudot code direct to computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Baudot code direct to computers?
Newsgroups: alt.folklore.computers
Date: Tue, 26 Aug 2008 16:31:16 -0400
hancock4 writes:
For a great many years the five-bit Baudot code was for data transmission. When computers or tab machines would communicate, data would be punched on cards, converted to Baudot tape, transmitted, and then the process reversed. IBM had a machine to convert from Baudot to Hollerith and vice versa (I assume other vendors did as well).

In the early 1960s ASCII was developed which computers could use directly without a separate tape conversion process. That allowed Teletypewriters to act as terminals to a computer in an active on-line real time environment. [This is a very simplistic summary.] A very popular computer terminal was the Teletype model 33. This was an ASCII machine.

Were there computers that supported direct Baudot connections, either as one way (e.g. broadcasting messages) or two way (on line inquiry)?

I believe a very early Bell Labs computer (circa 1939) used Baudot TTYs as their terminal in a real time set up.


lots of Series/1s ... from long ago and far away:

Date: 12 April 1985, 20:07:33 EST
To: wheeler

Hi!
...

The feed pipe is now 14.4kbps async. 5 bit Baudot.

They are going to convert but WE feel it might not be on the same schedule as we are, sooo.. we feel the Series/1 is required to convert the 5bit Baudot to ASCII until the other system is complete.


... snip ... top of post, old email index

later that year ...
http://query.nytimes.com/gst/fullpage.html?res=9F06E7D9133BF931A2575AC0A963948260

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM THINK original equipment sign

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM THINK original equipment sign
Newsgroups: bit.listserv.ibm-main
Date: Wed, 27 Aug 2008 09:26:45 -0400
ibm-main@TPG.COM.AU (Shane) writes:
When I was at Amdahl, the tech services manager of one of (the ???) biggest Aussie customers had a pretty good collection of vendor mugs. He made a point of ensuring vendors got a competitors mug for coffee. Lots of fun around tender time - the "out-of-town" hot-shot salesmen didn't know which way to look when he made them a brew. Especially in a multi-vendor briefing .... ;-)

Ah ... thems were the days.


when my brother was regional marketing rep for Apple (largest physical region in conus) ... he would almost fawn over how much he liked the other vendor's (IBM) coffee mugs and would offer to trade Apple mugs if he could have those really neat mugs.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Taxes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taxes
Newsgroups: alt.folklore.computers
Date: Wed, 27 Aug 2008 09:55:01 -0400
jmfbahciv <jmfbahciv@aol> writes:
I don't understand this one. I haven't seen anything on the income tax forms that would support this claim on pure income. Are they talking about the capital gains 10% rate?

re:
https://www.garlic.com/~lynn/2008m.html#25 Taxes

the referenced article
http://www.consumeraffairs.com/news04/2008/08/ceo_taxpayers.html

lists $20billion in executive compensation tax loop-holes. the article does contribute to confusing the effective tax rate issue (i.e. actual tax paid divided by total compensation) by framing it as loop-holes that "encourage excessive executive pay".

A lower effective tax rate (because of tax loop-holes) is a separate issue from the dramatic change in the ratio of executive pay to worker pay ... exploding from a ratio of 20:1 to 400:1 (compared to a ratio of 10:1 in most of the rest of the world).

One could make the case that with a lower effective tax rate (than workers) ... the effective ratio of executive pay to worker pay is actually larger than the gross (before tax) ratio (if the 400:1 ratio is gross, before-tax compensation ... might the after-tax compensation ratio be more like 500:1?).
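
to make that concrete with purely made-up rates: if gross compensation is 400:1, the worker pays an effective 30% and the executive an effective 15%, then the after-tax ratio is 400 x (0.85/0.70) ... roughly 486:1, i.e. noticeably larger than the gross ratio.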

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Future architectures

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Future architectures
Newsgroups: alt.comp.hardware.pc-homebuilt,comp.arch,sci.electronics.design,alt.folklore.computers
Date: Wed, 27 Aug 2008 10:37:00 -0400
rpw3@rpw3.org (Rob Warnock) writes:
Well, you couldn't tell it by me!! ;-} I started coding in 1965, and *none* of the machines I learned on[1] had *any* caches yet, not even the venerable DEC PDP-10 (KA10) we got in 1970 (FCS Sep. 1967) -- and in those days the -10 was used for quite significant timesharing loads! Not until the KL10 (FCS June 1975) did the PDP-10 series get any cache at all.[2]

look at virtual memory systems from the 60s ... cp40 (on a 360/40 with custom virtual memory hardware) and cp67 (on the 360/67 that came standard with virtual memory); the size of real storage and the relative page-miss latency to the paging drum (in processor cycles) are comparable to modern processor caches and the relative cache-miss latency to memory. somewhat related earlier post in this thread
https://www.garlic.com/~lynn/2008m.html#1 Future architectures

besides the mentioned paging algorithm work as an undergraduate in the 60s, i had also done a lot of scheduling algorithm and other performance related work (all of it shipping in the cp67 product). in the (simplification) morph from cp67 to vm370 ... a lot of that work was dropped.

i had moved a lot of the work (that had been dropped in the morph) to vm370 and made it available in internally distributed systems ... some recent posts with references
https://www.garlic.com/~lynn/2008l.html#72 Error handling for system calls
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question

when the future system project failed
https://www.garlic.com/~lynn/submain.html#futuresys

there was something of a mad rush to get stuff back into the 370 product pipeline (which had been neglected ... on the assumption that future system would replace 370). this was possibly some of the motivation to pick up & release much of the stuff that I had been doing (during the future system period). some recent references:
https://www.garlic.com/~lynn/2008m.html#1 Yet another squirrel question
https://www.garlic.com/~lynn/2008m.html#10 Unbelievable Patent for JCL

one of the features that I had added when moving a lot of my stuff from cp67 to vm370 ... was some scheduling cache optimization (with the increasing use of caches on 370 processors). Nominally, the system was enabled for (asynchronous) i/o interrupts ... which can put a lot of downside pressure on the cache hit ratio. The scheduler would look at relative i/o interrupt rates ... and change from generally enabled for i/o interrupts to mostly disabled for i/o interrupts with periodic checks for pending i/o interrupts. This traded off cache-hit performance against i/o service latency.
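
a rough sketch of that kind of heuristic (hypothetical names and threshold, not the actual vm370 scheduler code): above some measured interrupt rate, dispatch work disabled for i/o interrupts and drain pending interrupts at periodic points; below it, stay enabled:

  /* illustrative heuristic only -- names and threshold are hypothetical */
  #define HIGH_RATE 500                      /* i/o interrupts per second */

  extern unsigned io_interrupt_rate(void);   /* recent measured rate */
  extern void run_enabled_for_io(void);      /* dispatch with i/o interrupts enabled */
  extern void run_disabled_for_io(void);     /* dispatch with i/o interrupts masked */
  extern void drain_pending_io(void);        /* take any queued i/o interrupts now */

  void dispatch_slice(void)
  {
      if (io_interrupt_rate() > HIGH_RATE) {
          /* heavy i/o: protect the cache by running disabled, then poll */
          run_disabled_for_io();
          drain_pending_io();                /* bounded added latency for i/o service */
      } else {
          /* light i/o: asynchronous interrupts cost little, keep latency low */
          run_enabled_for_io();
      }
  }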

for other topic drift ... there was a survey of some number of operations during the summer of '81 (which included some KL10 and Vax systems); this post has some excerpts from that survey (along with some comments comparing time-sharing on cp67 and some KL10 systems):
https://www.garlic.com/~lynn/2001l.html#61
other posts with other excerpts from that survey
https://www.garlic.com/~lynn/2006n.html#56
https://www.garlic.com/~lynn/2008m.html#11

other past posts mentioning scheduling/performance work
https://www.garlic.com/~lynn/subtopic.html#fairshare
other past posts mentioning paging algorithm work
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM THINK original equipment sign

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM THINK original equipment sign
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 28 Aug 2008 11:00:46 -0400
sebastian@WELTON.DE (Sebastian Welton) writes:
I have an original IBM Thinkpad. This is a small brown pocket notepad with the word 'THINK' printed on the front and 'IBM' on the back (pn 520-6430 nad 520-6431) still with the original paper pad inside but I think I'll keep it as its quite amusing showing people.

I have a couple of the brown pocket notepads ... but i also have (a round, clear globe):
https://www.garlic.com/~lynn/vnet1000.jpg

1000th node globe

it has gotten a little dinged over the years.

the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than arpanet/internet from just about the beginning until sometime possibly late '85 or early '86.

past reference mentioning the 1000th node
https://www.garlic.com/~lynn/99.html#112
https://www.garlic.com/~lynn/internet.htm#22

another post mentioning corporate locations that added one or more new hosts/nodes on the internal network that year
https://www.garlic.com/~lynn/2006k.html#8 Arpa address

the internal network was originally developed at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

the same place that originated virtual machines, GML, lots of interactive stuff.

for a recent, slightly related networking post about a couple yrs earlier (1980) ... 300 people from the IMS group having to be moved to an offsite location ... because STL had filled up (includes screen shot of the 3270 logon logo):
https://www.garlic.com/~lynn/2008m.html#20 IBM-MAIN longevity

One of the interesting aspects of the internal network implementation was that it effectively had a form of gateway implementation in every node. this became important when interfacing with hasp/jes networking implementations.

part of the issue was that hasp/jes networking started off defining nodes using spare slots in the 255-entry table for pseudo (unit record) devices ... a typical hasp/jes might have only 150 entries available for defining network nodes. the hasp/jes implementation also had a habit of discarding traffic where the originating node and/or the destination node wasn't in its internal table. the internal network quickly exceeded the number of nodes that could be defined in hasp/jes ... and its proclivity for discarding traffic ... pretty much relegated hasp/jes to boundary nodes. by the time hasp/jes got around to increasing the limit to 999 nodes ... the internal network was already over 1000 nodes ... and by the time it was further increased to 1999 nodes ... the internal network was over 2000 nodes.
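
a toy illustration (hypothetical structures, not jes or vnet code) of the behavioral difference: with a fixed-size node table and a discard policy, traffic from or to an unknown node is simply dropped ... a gateway-style node instead forwards what it doesn't recognize toward a neighbor that might know better:

  #include <string.h>

  #define MAX_NODES 255                /* table shared with pseudo unit-record devices */
  struct node { char name[9]; int link; };
  static struct node table[MAX_NODES];
  static int nnodes;

  static int lookup(const char *name)
  {
      for (int i = 0; i < nnodes; i++)
          if (strcmp(table[i].name, name) == 0)
              return i;
      return -1;
  }

  /* jes-style: unknown origin or destination -> traffic is discarded */
  int route_discarding(const char *origin, const char *dest)
  {
      int d = lookup(dest);
      if (lookup(origin) < 0 || d < 0)
          return -1;                   /* dropped on the floor */
      return table[d].link;
  }

  /* gateway-style: unknown destination is passed toward a default neighbor */
  int route_gateway(const char *dest, int default_link)
  {
      int d = lookup(dest);
      return (d < 0) ? default_link : table[d].link;
  }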

the hasp/jes implementation also had a design flaw where network information was intermingled with other hasp/jes processing control information (as opposed to a clean separation). the periodic outcome was that two hasp/jes systems at different release levels were typically unable to communicate ... and in some cases, release incompatibilities could cause other hasp/jes systems to crash (there is an infamous scenario where a san jose hasp/jes system was crashing Hursley hasp/jes systems).

The internal networking support started accumulating some number of "release-specific" hasp/jes "drivers" ... where an intermediate internal network node was configured to start the corresponding hasp/jes driver for the system on the other end of the wire. As the problems with release incompatibilities between hasp/jes systems increased ... the internal network code evolved a canonical hasp/jes representation ... and drivers would translate that format to the specific hasp/jes release (as appropriate). In the hursley crashing scenario ... somebody even got around to blaming the internal network code for not preventing a san jose hasp/jes system from crashing Hursley hasp/jes systems.

By the time, BITNET started
https://www.garlic.com/~lynn/subnetwork.html#bitnet

they had pretty much eliminated shipping native drivers ... just the hasp/jes compatible drivers ... even tho the native drivers were much more efficient and had higher thruput than the hasp/jes drivers ... although the native drivers did continue to be used on the internal network (note these were NOT SNA).

misc. past posts mentioning hasp/jes (including hasp/jes networking support)
https://www.garlic.com/~lynn/submain.html#hasp

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM THINK original equipment sign

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM THINK original equipment sign
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 28 Aug 2008 14:40:33 -0400
re:
https://www.garlic.com/~lynn/2008m.html#35 IBM THINK original equipment sign

and for something different ... a 2741 APL typeball
https://www.garlic.com/~lynn/aplball.jpg

2741 apl typeball


https://www.garlic.com/~lynn/aplball2.jpg

2741 apl typeball

the science center (besides virtual machines, gml, a lot of online, interactive stuff, timesharing, performance work, monitoring, profiling, early work that led to capacity planning, etc)
https://www.garlic.com/~lynn/subtopic.html#545tech

also had taken apl\360 and ported it to cms ... which was released as cms\apl ... did a lot of work on apl storage management as part of transitioning from a small (16k-32k byte) workspace real-storage swapped environment to a large (up to 16mbyte) workspace virtual storage paged environment. there was also work done allowing apl access to system resources like files and external data.

having "large" workspaces and ability to access files and other system facilities enabled a much broader variety of real-world applications. one such was the business planners in corporate hdqtrs ... had the most sensitive of corporate information (detailed customer data) loaded on the cambridge system ... and they accessed the cambridge cp67 system remotely from corporate hdqtrs ... for the development and execution of business models (type of thing that is now frequently done with spreadsheets).

this required some amount of attention to security details ... since the cambridge cp67 system also was used by non-employees from various educational institutions in the boston area (and as mentioned in my signature line ... i've had online home access since Mar70).

also as mentioned in this recent post
https://www.garlic.com/~lynn/2008m.html#1

in the wake of the 23jun69 unbundling announcement ... HONE (Hands-On Network Environment) started out as a number of cp67 virtual machine datacenters giving branch office SEs remote access for keeping up their skills/practice with operating systems. misc. past posts:
https://www.garlic.com/~lynn/subtopic.html#hone

however, somewhat in parallel, some number of CMS\APL based sales & marketing applications were being developed ... and they eventually came to dominate all HONE use ... and the original virtual machine purpose dwindled away.

CMS\APL (from the cambridge science center on cp67 cms) was eventually replaced with APL\CMS (from the palo alto science center on vm370 cms ... PASC also did the apl\cms 370/145 microcode assist).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Baudot code direct to computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Baudot code direct to computers?
Newsgroups: alt.folklore.computers
Date: Fri, 29 Aug 2008 13:25:52 -0400
Michael Black <et472@ncf.ca> writes:
Would they have already had a character generator ROM for the mainframe work?

They wanted to use parts that were readily available, rather than making them from scratch. If they'd already been making an EBCDIC character generator ROM, then that would fit since it didn't require that it be specially built for the project.

Of course, if they'd used EBCDIC, then that would have meant off the shelf printers (at least cheap off the shelf printers) weren't usable, and that likely affected the decision. They wanted to fit into the world they were moving into, not set some "standard" that everyone would have to move to.


note that prior to the pc ... there was the 3101 (glass teletype), which also had an available printer ... part of the move into the lower cost terminal market ... there weren't any 3270 &/or SNA devices in that price range.

the 3101 did support a local buffer and "block mode" operation. for the home terminal program ... there was an implementation of (fullscreen) 3270 emulation with 3101 block mode ... and stuff with optimized screen updates (if some of the data was already on the screen ... but at a different position ... the update could just involve some shuffling).

the home terminal 3270 emulation optimization got significantly fancier when PCs started replacing 3101s ... lots of compression and other encoding ... also relying on a much bigger buffer of previously transmitted data at the PC ... so a control sequence might just indicate displaying stuff that was already someplace in the PC buffer.
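
a toy sketch (not the actual home terminal program) of the basic screen-update optimization: compare the previous and desired screen images and transmit only the changed runs, addressed by position, rather than the whole screen ... the fancier versions would then also check whether a changed run already existed elsewhere in the receiver's buffer:

  #define ROWS 24
  #define COLS 80

  /* emit only the changed runs of each row; 'send' stands in for whatever
     block-mode control sequence actually carried updates to the terminal */
  void send_screen_delta(const char old_scr[ROWS][COLS],
                         const char new_scr[ROWS][COLS],
                         void (*send)(int row, int col, const char *data, int len))
  {
      for (int r = 0; r < ROWS; r++) {
          int c = 0;
          while (c < COLS) {
              if (old_scr[r][c] == new_scr[r][c]) { c++; continue; }
              int start = c;                       /* start of a changed run */
              while (c < COLS && old_scr[r][c] != new_scr[r][c])
                  c++;
              send(r, start, &new_scr[r][start], c - start);
          }
      }
  }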

I thot i had picture at home of 3101 ... but could only find picture with cdi miniterm (and microfiche viewer) from '79 ... and then later a PC with printer and two monitors (but no picture from period between the two with 3101).

old email mentioning 3101 and/or topaz
https://www.garlic.com/~lynn/2006y.html#email791011
https://www.garlic.com/~lynn/2006y.html#email791011b
https://www.garlic.com/~lynn/2006y.html#email800301
https://www.garlic.com/~lynn/2006y.html#email800311
https://www.garlic.com/~lynn/2006y.html#email800312
https://www.garlic.com/~lynn/2006y.html#email800314
https://www.garlic.com/~lynn/2006y.html#email810820

in these posts
https://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
https://www.garlic.com/~lynn/2006y.html#4 Why so little parallelism?
https://www.garlic.com/~lynn/2006y.html#31 "The Elements of Programming Style"

for other topic drift ... these old emails mentioned getting APL character set support on TOPAZ
https://www.garlic.com/~lynn/2006y.html#email791011
https://www.garlic.com/~lynn/2006y.html#email800301

recent APL related post with pictures of 2741 APL type ball
https://www.garlic.com/~lynn/2008m.html#36 IBM THINK original equipment sign

note that the 2741 terminal wasn't EBCDIC in the sense that EBCDIC bytes could be transmitted down the wire ... 2741 terminals required incoming/outgoing translate tables just like ASCII terminals required incoming/outgoing translate tables ... while an undergraduate in the 60s, i had added tty/ascii terminal support to cp67 (to the existing 2741 & 1052 support). misc. past posts mentioning that eventually leading to doing the clone controller project
https://www.garlic.com/~lynn/submain.html#360pcm


https://www.multicians.org/mga.html
from above:
2741

IBM terminal with Selectric mechanism, came after the 1050. Smaller desk, no card reader option, no control switches. Weighed about 200 pounds and cost as much as a new Buick. Used a device dependent 6-bit character set related to BCD, but with shift codes to access the larger character set. Transmission speed was 134.5 baud. Widely used on CTSS and Multics; we used the 963 typeball, which was closest to ASCII. 2741s with the 938 "correspondence" ball were supported too; when you dialed up, dialup_ printed a special message in both dialects: one would be gibberish and the other legible, and if you typed login it chose one translation and if you typed kigub it switched to the other translation and assumed you had typed login. The effect was that you typed "login" and the system detected your character code and logged you in. There were also two special pre-login commands, 963 and 938 that would set up the TTYDIM to understand your typing. The code was simpler because the numbers were the same in both encodings. Most MIT 2741s used tractor-fed paper slightly narrower than printer paper.


... snip ...

misc. other posts mentioning topaz/3101:
https://www.garlic.com/~lynn/99.html#69 System/1 ?
https://www.garlic.com/~lynn/2000g.html#17 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
https://www.garlic.com/~lynn/2001b.html#13 Now early Arpanet security
https://www.garlic.com/~lynn/2001h.html#32 Wanted: pictures of green-screen text
https://www.garlic.com/~lynn/2001m.html#1 ASR33/35 Controls
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2003c.html#34 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#35 difference between itanium and alpha
https://www.garlic.com/~lynn/2003n.html#7 3270 terminal keyboard??
https://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2005p.html#28 Canon Cat for Sale
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006n.html#56 AT&T Labs vs. Google Labs - R&D History
https://www.garlic.com/~lynn/2006y.html#24 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2007e.html#15 The Genealogy of the IBM PC
https://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007k.html#40 DEC and news groups
https://www.garlic.com/~lynn/2007s.html#48 ongoing rush to the new, 40+ yr old virtual machine technology
https://www.garlic.com/~lynn/2007t.html#74 What do YOU call the # sign?
https://www.garlic.com/~lynn/2008l.html#79 Book: "Everyone Else Must Fail" --Larry Ellison and Oracle ???

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Baudot code direct to computers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Baudot code direct to computers?
Newsgroups: alt.folklore.computers
Date: Fri, 29 Aug 2008 13:59:47 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
I thot i had picture at home of 3101 ... but could only find picture with cdi miniterm (and microfiche viewer) from '79 ... and then later a PC with printer and two monitors (but no picture from period between the two with 3101).

re:
https://www.garlic.com/~lynn/2008m.html#37 Baudot code direct to computers?

... haven't found a 3101 picture as of yet ...

(ascii) cdi miniterm 77-79 (compact microfiche viewer to the left)
https://www.garlic.com/~lynn/miniterm.jpg

home miniterm

later same desk with PC
https://www.garlic.com/~lynn/homepc.jpg

home pc

original pc with two internal 40-track diskette drives (the cover over the drive area was to help reduce noise), printer, color monitor, b&w monitor ... just to the left of the support are two (external) half-height teac 80-track diskette drives

earlier (prior to 77) had a (real) 2741 at home.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Baudot code direct to computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Baudot code direct to computers?
Newsgroups: alt.folklore.computers
Date: Fri, 29 Aug 2008 19:22:47 -0400
Jim Haynes <haynes@localhost6.localdomain6> writes:
The IBM PC era was a long time after the time I was referring to when I said IBM did everything possible to sink ASCII. I'm talking about the 1962-1970 time frame when IBM brought out EBCDIC and hoped that by the force of their market dominance it would prevail over ASCII.

note that 360s had an ascii mode bit in the PSW ... it disappeared with the transition to 370 (i don't know of it ever being used on 360).

360 PSW format (bit 12 ascii/ebcdic mode bit):
http://www.cs.clemson.edu/~mark/syscall/s360.html

also 360 principles of operation:
http://www.bitsavers.org/pdf/ibm/360/poo/

I was never really clear how it really changed the operations of the machine.

as referenced
https://www.garlic.com/~lynn/2008m.html#37 Baudot code direct to computers?

initial cp67 installed at the university had support for 2741 and 1052 terminals ... I had to do the code to add support for tty/ascii terminals. the terminal control unit had the ability to dynamically change the line-scanner under program control and cp67 used that to dynamically determine terminal type (between 2741 and 1052) and switch the line-scanner as appropriate. I figured I could do the same when adding in the tty/ascii terminal support (dynamically determine terminal type and appropriately switch between the 2741, 1052, and tty line-scanners). The problem was that while the line-scanner could be switched, the baud rate didn't change (it was hardwired for each port). For "hard-wired" terminals ... which went into a port with the appropriately configured baud rate ... things went fine.
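
a minimal sketch (hypothetical names, not the actual cp67 code) of that terminal-type determination: switch the port to each candidate line-scanner in turn and keep the first one that gets a sensible response:

/* hypothetical sketch: probe with each candidate line-scanner in turn;
   set_line_scanner()/probe_terminal() stand in for the real controller
   commands and are not actual cp67/2702 interfaces */
enum term_type { TERM_2741, TERM_1052, TERM_TTY, TERM_UNKNOWN };

extern void set_line_scanner(int port, enum term_type t);
extern int  probe_terminal(int port);   /* nonzero if the response looked valid */

enum term_type determine_terminal(int port)
{
    static const enum term_type candidates[] = { TERM_2741, TERM_1052, TERM_TTY };

    for (int i = 0; i < 3; i++) {
        set_line_scanner(port, candidates[i]);
        if (probe_terminal(port))
            return candidates[i];
    }
    return TERM_UNKNOWN;   /* e.g. dial-up at a baud rate the port isn't wired for */
}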

Trying to do general dynamic terminal support (like using a common phone number for all dial-up) ... could get into baud-rate mismatch. This was part of the motivation behind the university doing a clone controller; reverse engineer the 360 channel interface and build a controller channel interface card ... for an interdata/3 ... programmed to emulate the mainframe controller ... but the port interface would also dynamically determine baud rate.
https://www.garlic.com/~lynn/submain.html#360pcm

this evolved into a cluster of interdatas ... an interdata/4 (for the controller emulation) and one or more interdata/3s (dedicated for port, line-scanner functions). later interdata was bought up by perkin-elmer and the box sold under the perkin-elmer brand.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM--disposition of clock business

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM--disposition of clock business
Newsgroups: alt.folklore.computers
Date: Fri, 29 Aug 2008 20:11:47 -0400
Eric Sosman <esosman@ieee-dot-org.invalid> writes:
(The bosses were universally unloved, because they were as a class incompetent. Think about it: What kind of upper-level person does Corporate HQ encourage to move *away* from the center of the action? There is such a thing as good management, and I've been lucky enough to have some from time to time, but this batch had been evolutionarily selected for failure. Can you say "Golgafrincham B Ark?")

there was an analogous scenario in the late 70s about people who took (new) full-time "security" positions ... i.e. all the people that were productively contributing to products couldn't be spared ... it was only those that weren't really needed that were available to fill those new "security" positions.

a slight footnote ... in the period, nearly everything that might be considered security was normally dealt with as part of standard business-critical data processing.

slightly related security issue mentioned
https://www.garlic.com/~lynn/2008m.html#10 Unbelievable Patent for JCL
https://www.garlic.com/~lynn/2008m.html#36 IBM THINK original equipment sign

i.e. having the most valuable corporate hdqtrs information on the same (timesharing) system as a lot of non-employees/students from area educational institutions.

this reference appears to have been to something similar
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM--disposition of clock business

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM--disposition of clock business
Newsgroups: alt.folklore.computers
Date: Sat, 30 Aug 2008 09:43:30 -0400
Eric Sosman <esosman@ieee-dot-org.invalid> writes:
The Peter Principle posits that people are promoted from jobs they can do until they reach jobs they can't. Shall we formulate a Paul Principle saying that when the institution recognizes that someone has Petered out and transfers him from the job he can't do to a new assignment, the new assignment is even further beyond his abilities than the old?

re:
https://www.garlic.com/~lynn/2008m.html#40 IBM--disposition of clock business

in this particular situation ... there were some number of brand new job positions created for this thing called security (which were staff positions and had no direct P&L contribution). the issue was which people (in existing jobs with direct P&L contribution) would be identified as available for such new positions.

the peter principle is that people are promoted until they reach jobs they can't do. there is a slightly different scenario regarding heads roll uphill ... people get promotions for really screwing up ... as a way of trying to get rid of them and away from any direct P&L responsibility. the corollary to heads roll uphill ... is that the people at the top of the list for new (non P&L) positions ... are the ones being unloaded (because they aren't wanted).

in theory, those being unloaded are the least productive and aren't contributing to the organization's success ... the least productive can be a threat to organization success ... which can translate into a threat to the success & security of the individuals in the group.

however, the most productive can also be viewed as a threat to other individuals, i.e. there are situations where those (excessively productive members) may be labeled as non-team players (for contributing significantly more than anybody else in the organization) and are treated similarly badly ...

for additional drift, this recent reference to being told that they could have been forgiven for being wrong, but they were never forgiven for being right:
https://www.garlic.com/~lynn/2008m.html#30 Taxes

also reference to Boyd's quotes about having to choose between taking credit for doing the work and actually doing the work.

if you really are interested in going a little farther afield ... references to an article from a year ago about nothing succeeding like failure
https://www.garlic.com/~lynn/aadsm26.htm#59 On cleaning up the security mess: escaping the self-perpetuating trap of Fraud?
https://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007u.html#63 folklore indeed

some number of the scenarios are with regard to (numerous multi-billion dollar) failed dataprocessing modernization projects (in some cases, one failed effort after another for the same project, each one being declared a success before being scrapped and moving on to the next one) ... some federal gov, some private. misc. recent references to modernization (for a little x-over into the baudot thread, some number of the following is with reference to various ATC-modernization efforts over the years):
https://www.garlic.com/~lynn/2003l.html#13 Cost of patching "unsustainable"
https://www.garlic.com/~lynn/2003l.html#14 Cost of patching "unsustainable"
https://www.garlic.com/~lynn/2003m.html#13 Cost of patching "unsustainable"
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2005.html#37 [OT?] FBI Virtual Case File is even possible?
https://www.garlic.com/~lynn/2005c.html#17 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#29 Using the Cache to Change the Width of Memory
https://www.garlic.com/~lynn/2005t.html#6 phishing web sites using self-signed certs
https://www.garlic.com/~lynn/2006b.html#37 X.509 and ssh
https://www.garlic.com/~lynn/2006k.html#49 Value of an old IBM PS/2 CL57 SX Laptop
https://www.garlic.com/~lynn/2006o.html#9 Pa Tpk spends $30 million for "Duet" system; but benefits are unknown
https://www.garlic.com/~lynn/2007.html#17 SSL info
https://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs
https://www.garlic.com/~lynn/2007i.html#38 John W. Backus, 82, Fortran developer, dies (Actually, Working under the table!)
https://www.garlic.com/~lynn/2007i.html#42 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007o.html#18 Flying Was: Fission products
https://www.garlic.com/~lynn/2007o.html#23 Outsourcing loosing steam?
https://www.garlic.com/~lynn/2007o.html#43 Flying Was: Fission products
https://www.garlic.com/~lynn/2008h.html#6 The Return of Ada

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

APL

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: APL
Newsgroups: alt.folklore.computers
Date: Sat, 30 Aug 2008 10:01:32 -0400
hancock4 writes:
I could understand how _mathmaticians_ would like the language. But did anyone else like the language?

in the 70s, there was significant use of APL for modeling ... some amount of stuff that is currently done with spreadsheets. However, there was also lots of stuff that was along the lines of rapid development ... similar to stuff currently done with javascript, php, etc.

nearly all of the HONE applications were done in APL (initially cms\apl on cp67, moved to apl\cms on vm370) for corporate world-wide sales & marketing support. recent references:
https://www.garlic.com/~lynn/2008m.html#36 IBM THINK original equipment sign

i've mentioned various times that corporate hdqtrs business modeling people loaded all the most sensitive company & customer business information on the cambridge system not long after cms\apl was operational.
https://www.garlic.com/~lynn/2008m.html#40 IBM--disposition of clock business

In the 70s, I would do the annual tax return in APL ... basically coding the required lines from the tax return ... inputting the required fields and outputting all the calculations for the fields that needed to be filled in.
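
purely as an illustration of the style (field names and the flat rate are invented for the example, nothing to do with the real forms or the actual APL application), the "input the required fields, compute the derived fields" pattern looks like:

/* invented illustration of the "input the required fields, compute the
   derived fields" style; field names and the flat 20% rate are made up */
#include <stdio.h>

int main(void)
{
    double wages, interest, deductions;

    printf("wages? ");      scanf("%lf", &wages);
    printf("interest? ");   scanf("%lf", &interest);
    printf("deductions? "); scanf("%lf", &deductions);

    double gross   = wages + interest;
    double taxable = (gross > deductions) ? gross - deductions : 0.0;
    double tax     = taxable * 0.20;              /* invented flat rate */

    printf("gross income:   %.2f\n", gross);
    printf("taxable income: %.2f\n", taxable);
    printf("tax:            %.2f\n", tax);
    return 0;
}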

there was also a sophisticated analytical system performance and workload profile model done in apl at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

this was eventually deployed on HONE as the performance predictor ... allowed customer sales support to enter customer configuration and workload profile information and ask "what if" performance questions regarding changes to configuration and/or workload. This and other performance work at the science center evolved into things like capacity planning.
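
the internals of the performance predictor weren't published; purely as a toy illustration of the "what if" idea (a single-queue approximation, nothing like the real APL model), changing the workload input changes the predicted response time:

/* toy "what if" calculation: single-queue M/M/1 approximation only --
   the actual HONE performance predictor was a far more elaborate APL
   model of cpu, storage and disk configuration/workload */
#include <stdio.h>

/* predicted response time given arrival rate (req/sec) and average
   service time (sec); only meaningful while utilization stays below 1 */
static double response_time(double arrival_rate, double service_time)
{
    double util = arrival_rate * service_time;
    return service_time / (1.0 - util);
}

int main(void)
{
    /* "what if" the transaction rate grows from 20/sec to 30/sec
       against a device with 25ms average service time? */
    printf("now:  %.3f sec\n", response_time(20.0, 0.025));
    printf("then: %.3f sec\n", response_time(30.0, 0.025));
    return 0;
}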

misc. past posts mentioning performance predictor
https://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002q.html#28 Origin of XAUTOLOG (x-post)
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003p.html#29 Sun researchers: Computers do bad math ;)
https://www.garlic.com/~lynn/2004g.html#42 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
https://www.garlic.com/~lynn/2004k.html#31 capacity planning: art, science or magic?
https://www.garlic.com/~lynn/2004o.html#10 Multi-processor timing issue
https://www.garlic.com/~lynn/2005d.html#1 Self restarting property of RTOS-How it works?
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#1 Single System Image questions
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#12 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005o.html#30 auto reIPL
https://www.garlic.com/~lynn/2005o.html#34 Not enough parallelism in programming
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006f.html#22 A very basic question
https://www.garlic.com/~lynn/2006f.html#30 A very basic question
https://www.garlic.com/~lynn/2006g.html#34 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#25 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#3 virtual memory
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
https://www.garlic.com/~lynn/2006o.html#25 CPU usage for paging
https://www.garlic.com/~lynn/2006s.html#24 Curiousity: CPU % for COBOL program
https://www.garlic.com/~lynn/2006t.html#28 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2007k.html#65 Non-Standard Mainframe Language?
https://www.garlic.com/~lynn/2007r.html#68 High order bit in 31/24 bit address
https://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Baudot code direct to computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Baudot code direct to computers?
Newsgroups: alt.folklore.computers
Date: Sat, 30 Aug 2008 10:15:22 -0400
Mensanator <mensanator@aol.com> writes:
Wow! A rotary phone.

re:
https://www.garlic.com/~lynn/2008m.html#38 Baudot code direct to computers?

(internal) corporate business (tie) line ...

in this sample of various "new nodes" added during '83 (part of exceeding 1000 nodes/hosts)
https://www.garlic.com/~lynn/internet.htm#22
https://www.garlic.com/~lynn/99.html#112

the "8-" prefix numbers are all internal corporate telephone network.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM-MAIN longevity

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM-MAIN longevity
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 30 Aug 2008 10:35:13 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
dish was significantly smaller than the 4.5m dishes for tdma system on internal network (working with nearly a decade earlier) ... had started with some telco T1 circuits, some T1 circuits on campus T3 collins digital radio (microwave, multiple locations in south san jose) and some T1 circuits on existing C-band system that used 10m dishes (west cost / east coast). then got to work on design of tdma system with 4.5m dishes for Ku-band system and a transponder on sbs-4 (that went up on 41-d, 5sep84).

re:
https://www.garlic.com/~lynn/2008m.html#19 IBM-MAIN longevity

the RF gear was built to spec by a company on the other side of the pacific. there were two sets of digital TDMA gear built to specs ... by two different companies ... one of the digital units at one of the sites:
https://www.garlic.com/~lynn/hsdttdma.jpg

hsdt digital tdma

these had redundant hot-pluggable boards, redundant power supplies, various kinds of real-time board fall-over, etc.

misc. posts mentioning various aspects of HSDT (high-speed data transport) project:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

for other drift ... posts regarding business trip related to the RF-gear
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2008e.html#45 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2008h.html#31 VTAM R.I.P. -- SNATAM anyone?
https://www.garlic.com/~lynn/2008i.html#99 We're losing the battle

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM--disposition of clock business

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM--disposition of clock business
Newsgroups: alt.folklore.computers
Date: Sat, 30 Aug 2008 22:09:33 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
some number of the scenarios are with regard to (numerous multi-billion dollar) failed dataprocessing modernization projects (in some cases, one failed effort after another for the same project, each one being declared a success before being scrapped and moving on to the next one) ... some federal gov, some private. misc. recent references to modernization (for a little x-over into the baudot thread, some number of the following is with reference to various ATC-modernization efforts over the years):

re:
https://www.garlic.com/~lynn/2008m.html#41 IBM--disposition of clock business

more modernization efforts:

FAA Outage Highlights Need For Modernization
http://www.redorbit.com/news/technology/1538476/faa_outage_highlights_need_for_modernization/index.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Baudot code direct to computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Baudot code direct to computers?
Newsgroups: alt.folklore.computers
Date: Sun, 31 Aug 2008 10:30:42 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Trying to do general dynamic terminal support (like using a common phone number for all dial-up) ... could get into baud-rate mismatch. This was part of the motivation behind the university doing a clone controller; reverse engineer the 360 channel interface and build a controller channel interface card ... for an interdata/3 ... programmed to emulate the mainframe controller ... but the port interface would also dynamically determine baud rate.
https://www.garlic.com/~lynn/submain.html#360pcm

this evolved into a cluster of interdatas ... an interdata/4 (for the controller emulation) and one or more interdata/3s (dedicated for port, line-scanner functions). later interdata was bought up by perkin-elmer and the box sold under the perkin-elmer brand.


re:
https://www.garlic.com/~lynn/2008m.html#39 Baudot code direct to computers?

for additional folklore ... two "bugs" I remember during the development.

One involved the (reverse engineered) board that interfaced to the mainframe channel. turns out that the channel would acquire the memory bus (for data transfer) ... locking out other use (like the processor) ... implying that it had to be released periodically. Another user of the memory bus was the processor timer ... which was accessed at storage location 80 (x'50'). The timer mechanism required updating storage every time it tic'ed. If the timer tic'ed a second time before storage had been updated for the previous tic ... the machine would "red-light" (come to a halt with a hardware failure). The 360/67 came standard with the high-speed timer feature ... which tic'ed approximately once every 13 microseconds. So one of the first "bugs" ... was not periodically releasing the channel interface (so that it would release the memory bus).
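
a hypothetical sketch of the kind of fix (made-up routine names, not the actual code): break the transfer into small bursts so the channel/memory bus is released often enough for the timer update to get through:

/* hypothetical sketch of the fix (made-up routine names): break the
   data transfer into small bursts so the memory bus is released often
   enough for the 360/67 timer (location x'50', ~13 microsecond tics)
   to be updated between bursts */
extern void acquire_memory_bus(void);
extern void release_memory_bus(void);
extern void transfer_bytes(unsigned char *p, int n);

#define BURST 8   /* made-up burst size, small enough to fit inside a tic */

void channel_transfer(unsigned char *buf, int len)
{
    while (len > 0) {
        int n = (len < BURST) ? len : BURST;
        acquire_memory_bus();
        transfer_bytes(buf, n);
        release_memory_bus();   /* let the timer update get at storage */
        buf += n;
        len -= n;
    }
}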

The other "bug" was first getting TTY terminal data into 360 memory ... and finding it all garbage. It turns out in emulating mainframe controller ... we overlooked that the line-scanner in the mainframe controller took the leading bit off the port and put it into the low-order bit position (before getting full byte for transfer to mainframe memory) ... so what appeared in mainframe memory wasn't ascii-byte, it was bit-reversed ascii-byte.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IBM-MAIN longevity

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM-MAIN longevity
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 31 Aug 2008 15:13:36 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
... and turbo pascal snippet from somewhere long ago and far away:

re:
https://www.garlic.com/~lynn/2008l.html#16 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008l.html#17 IBM-MAIN longevity

for other turbo pascal drift ... a thread from two years ago regarding attempting to resurrect old diskettes
https://www.garlic.com/~lynn/2006s.html#35 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#36 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#37 Turbo C 1.5 (1987)
https://www.garlic.com/~lynn/2006s.html#56 Turbo C 1.5 (1987)

and list of (some) resurrected diskettes (including turbo pascal):
https://www.garlic.com/~lynn/2006s.html#57 Turbo C 1.5 (1987)

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Mon, 01 Sep 2008 13:35:25 -0400
Lars Poulsen <lars@beagle-ears.com> writes:
For each prefix, there is a list of other prefixes that are local to that prefix. The NANPA (North American Number Plan Administration ... used to be part of BellCoRe, but nowadays contracted separately, and I think these days run by SAIC) publishes (for a fee) this database with quarterly updates. In some cases, a prefix exists solely in order to create a rate center for an isolated location even if it has only a few dozen residents. The local-relationship is not always symmetric. West of Santa Barbara is the rural comminity of Gaviota. Local calls from there are the 100 or so neighbors and the prefix that contains the county administration. Probably one of the smallest local rate centers in the nation.

possibly because SAIC bought bellcore ...

SAIC BUYS BELLCORE
http://www.gtld-mou.org/gtld-discuss/mail-archive/00814.html

from above:
SAIC, by buying Bellcore, will become the administrator of the North American Numbering Plan, basically the folks who make up the new area codes for the USA and Canada. They don't issue the actual phone numbers, just the area codes -- analogous to the com net and org domains.

... snip ...

a couple other references ...

Bellcore buy would broaden SAIC's talent base
http://www.fcw.com/print/2_30/news/64182-1.html
SALE OF BELLCORE TO SAIC REPORTED TO BE IMMINENT
http://www.cbronline.com/article_cg.asp?guid=EAD269E0-7A2D-4745-970A-86E6FC4A62EA

BELLCORE used to have a semantic group that we visited in conjunction with our knowledge base work. At some point BELLCORE had also joined MCC (talk about a consortium joining a consortium) and some of the semantic people were assigned to do work at MCC (in Austin).

and this reference:

Private Research Giant Is Said to Be in Talks for Bellcore
http://query.nytimes.com/gst/fullpage.html?res=9806E6DB113DF937A1575AC0A960958260&sec=&spon=&pagewanted=all

from above:
Although highly regarded for its work in computer and telephone networking, Bellcore's research role has been complicated in recent years by increasing competition by its Bell-company owners -- Ameritech, Bell Atlantic, BellSouth, Nynex, Pacific Telesis, SBC Communications and U S West -- especially now that mergers are pending between Bell Atlantic and Nynex and between Pacific Telesis and SBC.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Taxes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taxes
Newsgroups: alt.folklore.computers
Date: Mon, 01 Sep 2008 13:46:59 -0400
krw <krw@att.bizzzzzzzzzz> writes:
They can, but won't. Again, there are far too many to piss off. Ever hear of AARP?

one of the web news sites has had a 3-part video of economists sitting around a table in what looks like a bar (or restaurant). The focus of the discussion was a move to a straight flat-rate tax. The agreement among all the economists seemed to be that the current tax infrastructure promotes enormous corruption ... with huge amounts of money at stake for various special interests and lobbyists ... going after all sorts of exemptions, provisions, etc. The flat-rate claim also stated that the new code would be less than 300-400 pages, compared to the current code that runs to something like 60,000 pages (and besides the enormous corruption, represents significant windfall income for tax lawyers).

The issue in the discussion wasn't whether the current infrastructure contributes to enormous corruption or whether going to a flat rate would drastically simplify things and eliminate most of the current corruption. The issue seemed to be, with all the money involved ... how long it would take for a new, drastically simplified infrastructure to become as corrupt as the existing one.

One of the economists' footnotes in the discussion was that Ireland has attracted quite a few businesses because of various kinds of business tax provisions (which have significantly improved its economy) ... and Ireland is against the US moving to a straightforward simplified tax plan ... since they believe that would eliminate a lot of the motivation for companies to move to Ireland.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Taxes

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taxes
Newsgroups: alt.folklore.computers
Date: Mon, 01 Sep 2008 14:05:01 -0400
re:
https://www.garlic.com/~lynn/2008j.html#65 Taxes
https://www.garlic.com/~lynn/2008j.html#82 Taxes
https://www.garlic.com/~lynn/2008l.html#68 Taxes
https://www.garlic.com/~lynn/2008m.html#25 Taxes
https://www.garlic.com/~lynn/2008m.html#30 Taxes
https://www.garlic.com/~lynn/2008m.html#33 Taxes
https://www.garlic.com/~lynn/2008m.html#49 Taxes

another point in past discussions about the complexity of the current tax infrastructure (aside from the issue of the enormous corruption that it results in) is that the amount of resources devoted to dealing with it diverts a measurable part of the country's GNP.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Baudot code direct to computers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Baudot code direct to computers?
Newsgroups: alt.folklore.computers
Date: Tue, 02 Sep 2008 09:31:47 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
... haven't found a 3101 picture as of yet ...

(ascii) cdi miniterm 77-79 (compact microfiche viewer to the left)
https://www.garlic.com/~lynn/miniterm.jpg

home miniterm



later same desk with PC
https://www.garlic.com/~lynn/homepc.jpg

home pc



re:
https://www.garlic.com/~lynn/2008m.html#38 Baudot code direct to computers?

as previously noted, the phones were business tielines (easy world-wide connectivity from home); the 77-79 tieline phone was an old rotary

found a picture of the 3101 on the same desk (as in the above pictures), used in the period between the miniterm and the pc ... cropped to remove people in the picture
https://www.garlic.com/~lynn/home3101.jpg

home 3101

footnote on the noise reduction board covering the diskette drive space in the PC case ... the board has a sheet of 1/4in dense foam glued to the back.

earlier post in thread with several past 3101/topaz references:
https://www.garlic.com/~lynn/2008m.html#37 Baudot code direct to computers?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Are family businesses unfair competition?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Are family businesses unfair competition?
Date: September 1st, 2008 at 4:00 pm
Blog: Boyd
there are frequently several ways to spin the news ... another way (if people are feeling the gas crunch and are buying fewer large vehicles) ... is that Toyota is rapidly adapting its product mix to match consumer demand (I've posted before about having participated in some of the US auto industry C4 activity about becoming more agile and adapting to consumer demand).

there has been quite a bit written about wall-street orientation having created a 3-month time-horizon in American business (which tends to stifle any sort of investment).

Boyd also noted (in briefings) that in the late 70s and early 80s there was a shift in American business orientation as a newer generation (that had been young Army officers in WW2) moved up the corporate ladder and became top executives. He explained that US entry into WW2 required massive deployment of rapidly trained soldiers with little or no experience. To meet the situation, an infrastructure of rigid, top-down command&control was created to leverage the little experience that was available (assuming the vast majority of the individuals had no idea what they were doing). The orientation was that only a few people at the very top knew what they were doing (and nobody else did) ... requiring the rigid, top-down command&control infrastructure ... and as that generation moved up, the same orientation was starting to permeate corporate life.

The 90s saw a little retrenchment of the massive staff organizations with "downsizing". However, the assumption that the majority of the people in a large organization really don't know what they are doing has complemented the 3-month horizon orientation that avoids making decisions about infrastructure (or people) investment.

I've conjectured that at least part of the OODA-loop motivation was being able to highlight and contrast with the rigid, top-down, command&control orientation (that had arisen out of necessity at the beginning of WW2).

past Boyd posts
https://www.garlic.com/~lynn/subboyd.html#boyd

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Are family businesses unfair competition?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Are family businesses unfair competition?
Date: September 1st, 2008 at 10:31 pm
Blog: Boyd
re:
https://www.garlic.com/~lynn/2008m.html#52 Are family businesses unfair competition?

The internal network was larger than the arpanet/internet from just about the beginning until possibly sometime mid-85. This used technology that wasn't typical of the standard corporate networking products. In fact, at one point, one of the supreme networking experts from corporate hdqtrs came for a briefing on the internal network. After hearing how it allowed for decentralized operation ... he claimed that it couldn't exist. It had been established that the effort to implement such distributed operation would require thousands of person-years of effort ... and there had been no record of such a massive project ... past posts mentioning internal network:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

There have been some recent news articles that the current (US) ratio of ceo compensation to worker compensation is now 400:1 ... up significantly from having been 20:1 ... and compared to a ratio of 10:1 in most of the rest of the world. Boyd's scenario of a rigid, top-down command&control system predicated on the assumption that the top executives are the only skilled individuals in the organization (and everybody else is unskilled) would tend to support the compensation ratio (it isn't just the rigid command&control structure ... it is also the related assumption that there are no other skilled individuals in the organization).

past posts mentioning the 400:1 ratio
https://www.garlic.com/~lynn/2008i.html#73 Should The CEO Have the Lowest Pay In Senior Management?
https://www.garlic.com/~lynn/2008j.html#24 To: Graymouse -- Ireland and the EU, What in the H... is all this about?
https://www.garlic.com/~lynn/2008j.html#76 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#71 Cormpany sponsored insurance
https://www.garlic.com/~lynn/2008m.html#25 Taxes
https://www.garlic.com/~lynn/2008m.html#33 Taxes

past Boyd posts:
https://www.garlic.com/~lynn/subboyd.html#boyd

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Tue, 02 Sep 2008 14:43:54 -0400
Lars Poulsen <lars@beagle-ears.com> writes:
In the last couple of days, I have been thrown back to the world of multi-tasking hangs and deadlocks, since my Firefox-2 decided to upgrade itself to Firefox-3 and retire.

all the firefox releases that you could possibly want:
ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/

in browser, go to edit, preferences, advanced, update ... and turn off automatic updating

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

With all the highly publicised data breeches and losses, are we all wasting our time?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: With all the highly publicised data breeches and losses, are we all wasting our time?
Date: September 3rd, 2008
Blog: Information Security Network
we were called in to help word-smith the cal. (and later federal) electronic signature legislation.
https://www.garlic.com/~lynn/subpubkey.html#signature

some of the groups involved were also heavily involved in privacy issues and had done in-depth consumer surveys on the privacy subject and found the two most important issues were:
1) identity theft
2) denial of service (by institutions using personal information)

there was evidence of a connection between identity theft and data breaches ... but it seemed that little or no effort was being directed towards dealing with data breaches. this seemed to be the motivation behind the data breach notification legislation (hoping that the resulting publicity would increase data breach countermeasures, resulting in some moderation in the amount of identity theft)

note that about a year ago ... there were some articles and threads about "nothing succeeds like failure" ... referring to various security related issues (a constant requirement for a flow of new security features because the fundamental problems are never adequately addressed).

we had looked at this in the mid-90s in conjunction with the X9A10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. One of the biggest forms of identity theft from data breaches was the subcategory of account fraud (i.e. using information from the data breaches to perform fraudulent financial transactions). The resulting x9.59 financial standard didn't do anything to address data breaches ... however, part of the x9.59 financial standard was to eliminate the ability of crooks to use information from data breaches to perform fraudulent transactions (i.e. the breaches still might happen, but the majority of bad consequences that could result from such data breaches were eliminated). misc. x9.59 references
https://www.garlic.com/~lynn/x959.html#x959

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

With all the highly publicised data breeches and losses, are we all wasting our time?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: With all the highly publicised data breeches and losses, are we all wasting our time?
Date: September 3rd, 2008
Blog: Information Security Network
re:
https://www.garlic.com/~lynn/2008m.html#55 With all the highly publicised data breeches and losses, are we all wasting our time?

Note that part of the x9.59 analysis was with regard to security proportional to risk ... in the account fraud flavor of identity theft related to data breaches (which has been the majority of all breaches) ... the typical breached information is supposed to be protected by some commercial operation, where the value of the information (to that operation) is proportional to the profit that the commercial entity makes off the related transactions.

However, the value of the breached information to the crooks is the balance/credit-limit of the related accounts ... which can make the information worth 100 times more to the attacking crooks than to the defending merchants. The factor of 100 times (or more) mismatch between defender and attacker ... means that the attacker will frequently prevail (the attacker can potentially afford to outspend the defender by a factor of 100 or more). Part of the X9.59 financial standard was to change the paradigm and eliminate the value of the information to the attacker (rather than requiring ever increasing amounts of security).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

"Engine" in Z/OS?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Engine" in Z/OS?
Newsgroups: alt.folklore.computers
Date: Wed, 03 Sep 2008 17:05:55 -0400
hancock4 writes:
In an ad for their new release of Z/OS, IBM said, "Also, up 64 _engines_ and 1.5 TB per server of real memory per LPAR will ship in this release, as well as support for large pages".

Could someone translate that into 1975 mainframe talk?

I know that an LPAR is a logical partition, basically an 'independent' computer. But what is an "engine"? What is the 'server' as used in this context?

Are they saying you can get 1.5 terabytes (trillion bytes) of real (not virtual) memory in an LPAR? Isn't that an enormous amount of 'core'? I could see a mainframe today handling gigabytes of real memory, but into terabytes?

[mega = million, giga = billion, tera = trillion ?]

For application programs, I take it they have special assembler (user state, not supv state) instructions for handling large amounts of memory? Is that the "V" vector series of instructions?


Z10 Enterprise (feb/mar, 2008)
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS108-154&appname=USN

from above:
The IBM System z10 EC is a world-class enterprise server designed to meet your business needs. The System z10 EC is built on the inherent strengths of the IBM System z platform and is designed to deliver new technologies and virtualization that provide improvements in price/performance for key new workloads.

... snip ...

implies that an enterprise server effectively is a computer.

also from above reference:
Up to 1.5 terabytes of available real memory per server for growing application needs (with up to 1 TB real memory per LPAR).

... snip ...

which seems to imply that a z10 computer/server can be configured with up to 1.5 terabytes of real memory ... and an LPAR can be configured with up to 1 terabyte of real memory.

and

z/OS V1.10 (with some z10 EC) from August
http://www-01.ibm.com/common/ssi/index.wss?DocURL=http://www-01.ibm.com/common/ssi/rep_ca/6/897/ENUS208-186/index.html&InfoType=AN&InfoSubType=CA&InfoDesc=Announcement+Letters&panelurl=&paneltext=

from above:
Application and data serving scalability. Up to 64 engines, up to 1.5 TB per server with up to 1.0 TB of real memory per LPAR, and support for large (1 MB) pages on the z10 EC can help provide scale and performance for your critical workloads.

... snip ...

note 370 virtual memory had options for 2k and 4k virtual page support ... and 64k virtual segments and 1mbyte virtual segments. The 2k virtual page support and 64k virtual segments were dropped quite some time ago. A 1mbyte virtual page can somewhat be considered a 1mbyte virtual segment contiguously allocated in real storage.
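
back-of-envelope illustration (simple arithmetic, not from the announcement) of why large pages matter with memories this size ... mapping the same region with 1mbyte pages takes 1/256th the number of translation (and TLB) entries that 4k pages do:

/* back-of-envelope: translation entries needed to map a 1 TB region
   with 4k pages vs 1mbyte large pages (illustrative arithmetic only) */
#include <stdio.h>

int main(void)
{
    unsigned long long region = 1ULL << 40;             /* 1 TB */
    printf("4k  pages: %llu entries\n", region >> 12);  /* ~268 million */
    printf("1mb pages: %llu entries\n", region >> 20);  /* ~1 million */
    return 0;
}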

LPAR is a subset of virtual machine function ... supported in hardware (in some sense the virtual machine microcode performance assists evolved to the point that they could also be used to create LPARs). LPARs can be a fraction of a processor/engine, fractions of multiple processors/engines, several processors/engines, etc.

the basis for LPARs is the PR/SM function ... originally done on the 3090 ... which appeared to be in response to Amdahl's hypervisor support.

There are "central processors" .. CP engines ... but there are also IFL (integrated facility for linux) engines (a CP dedicated to linux workload ... distinction appears to allow different software license pricing for IFLs from CPs).
http://www-03.ibm.com/systems/z/os/linux/solutions/ifl.html

... and a configuration may also have application assist processors.
http://www.redbooks.ibm.com/redbooks.nsf/e9abd4a2a3406a7f852569de005c909f/458181414aa9c6cc85256ec500585946?OpenDocument

one of the functions that LPAR doesn't do is "virtual paging" ... so there is a requirement that each LPAR page be mapped to a real page.

for another view ... this is an old writeup regarding processor utilization in an LPAR environment.
http://www.vm.ibm.com/perf/tips/lparinfo.html

another kind of discussion regarding LPAR processor capacity
http://www-01.ibm.com/support/docview.wss?uid=tss1td103411

z/Architecture (z10) principles of operation and z/Architecture reference summary from February
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/XKS/DZ9ZBK08

note that z/Architecture has 64-bit addressing support. in the mid-70s, 370s offered 24-bit virtual & real addressing. "XA" for the 3081 introduced 31-bit virtual addressing. z/Architecture introduces a 64-bit addressing mode (i.e. instructions can address up to 64 bits of storage).

the principles of operation and reference summary go into quite a bit of detail about 64-bit instruction addressing and operation.
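
for scale, the addressing modes work out to the following address-space sizes (simple arithmetic, not from the documents above):

/* address-space sizes implied by the addressing modes mentioned above */
#include <stdio.h>

int main(void)
{
    printf("24-bit: %llu bytes (16 MB)\n", 1ULL << 24);
    printf("31-bit: %llu bytes (2 GB)\n",  1ULL << 31);
    /* 2^64 can't be expressed as a 64-bit shift result, so just state it */
    printf("64-bit: 18446744073709551616 bytes (16 EB)\n");
    return 0;
}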

I've posted numerous times in the past that by the mid-70s, processing power (and real storage sizes) were increasing much faster than disk thruput was increasing ... and as a result ... systems were increasingly taking advantage of real storage to compensate for disk thruput bottlenecks.

I had gotten into some amount of trouble in the early 80s by claiming that relative system disk thruput had declined by an order of magnitude between (cp67) 360/67 and (vm370) 3081 ... disk division executives assigned the performance group to refute the statement ... but they came back after a couple of weeks to say that I had actually slightly understated the problem ... recent post with reference
https://www.garlic.com/~lynn/2008j.html#1 OS X Finder windows vs terminal window wierdness

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Thu, 04 Sep 2008 10:06:39 -0400
jmfbahciv <jmfbahciv@aol> writes:
The architects of the KL started the VAX architecture after they were done with the KL. [Ah. Now I remember one name...Jud Leonard]

Most people who worked at DEC (not Digital) moved from one product line to another.


and when they shut down the vm370 development group out in burlington mall and tried to move everybody to POK ... some number of the people that wouldn't move went to work for DEC ... the joke was that the head of POK contributed significantly to VAX.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

CHROME and WEB apps on Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CHROME and WEB apps on Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 04 Sep 2008 10:41:27 -0400
timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
I cannot think of any earthly (or even unearthly) reason why Google Chrome would not already be 100% compatible with all the HTTP(S) implementations available on System z. It's a ubiquitous protocol, and HTTP has been running on mainframes longer than on any other system with the sole exception of the defunct NeXT operating system.

note that the first web server outside of europe was on the slac vm system:
https://ahro.slac.stanford.edu/wwwslac-exhibit

other history about HTML at CERN tracing back to sgml & the cms script clone done by waterloo
https://www.garlic.com/~lynn/2008j.html#86 CLIs and GUIs
https://www.garlic.com/~lynn/2008k.html#3 CLIs and GUIs
and
http://infomesh.net/html/history/early/

I've commented before that CERN had presented a paper at SHARE circa '74 comparing TSO and CMS ... and copies inside the corporation were classified "need to know only" (not wanting to unnecessarily expose employees to how bad TSO was).

the science center had been responsible for virtual machines, cp40, cp67, etc
https://www.garlic.com/~lynn/subtopic.html#545tech

GML was also invented at the science center in 69 ... and support for GML markup was added to the cms script command. gml eventually was standardized as sgml
https://www.garlic.com/~lynn/submain.html#sgml

science center was also responsible for technology used by most of the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

and was larger than the arpanet/internet from just about the beginning until possibly mid-85. this didn't use standard products from the communication product group (and was something of a point of contention with that organization) ... recent references to some of that
https://www.garlic.com/~lynn/2008i.html#99 We're losing the battle
https://www.garlic.com/~lynn/2008m.html#20 IBM-MAIN longevity

The vm slac web server predates SLAC bringing it up on NeXT platform.

much of NeXT was picked up from Mach microkernel at CMU (various pieces of andrew related activity) and can be traced forward to the current apple system.

there is a joke about the corporation paying for some of the andrew technology three times over. the corporation and dec equally funded project athena for $25m each ... but in that time-frame, the corporation funded cmu andrew activity for $50m. There were (at least) some sabbaticals of CMU professors at san jose research (possibly yorktown also).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

CHROME and WEB apps on Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CHROME and WEB apps on Mainframe?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Sep 2008 11:08:44 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
science center was also responsible for the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

and was larger than the arpanet/internet from just about the beginning until possibly mid-85. this didn't use standard products from the communication product group (and was something of a point of contention with that organization) ... recent references to some of that
https://www.garlic.com/~lynn/2008i.html#99 We're losing the battle
https://www.garlic.com/~lynn/2008m.html#20 IBM-MAIN longevity

The vm slac web server predates SLAC bringing it up on NeXT platform.

much of NeXT was picked up from Mach microkernel at CMU (various pieces of andrew related activity) and can be traced forward to the current apple system.

there is a joke about the corporation paying for some of the andrew technology three times over. the corporation and dec equally funded project athena for $25m each ... but in that time-frame, the corporation funded cmu andrew activity for $50m. There were (at least) some sabbaticals of CMU professors at san jose research (possibly yorktown also).


re:
https://www.garlic.com/~lynn/2008m.html#59 CHROME and WEB apps on Mainframe?

one of the other responses in the thread (regarding NeXT continuing in current apple system) listed this reference:
http://lowendmac.com/orchard/05/next-computer-history.html

from above:
Most analysts expected Apple to acquire or license BeOS from Gassée's Be Inc. and quickly release an Apple branded version (BeOS was already available for Power Macs and had a Mac-like interface). Be apparently demanded too much money, and Apple decided to look elsewhere.

... snip ...

other past posts mentioning mach, next, and apple:
https://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
https://www.garlic.com/~lynn/2003.html#46 Horror stories: high system call overhead
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2005j.html#26 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005q.html#49 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#43 Numa-Q Information
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006m.html#14 The AN/FSQ-31 Did Exist?!
https://www.garlic.com/~lynn/2007i.html#26 Latest Principles of Operation
https://www.garlic.com/~lynn/2007q.html#26 Does software life begin at 40? IBM updates IMS database
https://www.garlic.com/~lynn/2007v.html#65 folklore indeed
https://www.garlic.com/~lynn/2008b.html#20 folklore indeed
https://www.garlic.com/~lynn/2008b.html#22 folklore indeed
https://www.garlic.com/~lynn/2008c.html#53 Migration from Mainframe to othre platforms - the othe bell?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

CHROME and WEB apps on Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CHROME and WEB apps on Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 04 Sep 2008 13:30:34 -0400
Efinnell15@AOL.COM (Ed Finnell) writes:
Some of the ex-IBMers at SHARE claimed a CMS for MVS back in mid-eighties that was 'shelved' for 'market considerations' whatever that is. Shortly after a RATSO(really advanced) project was started but didn't get much traction with IBM. Don't know if was politics or personalities. I'm told it's in same the bin as the IBM internal clist compiler(CLIQ).

re:
https://www.garlic.com/~lynn/2008m.html#59 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#60 CHROME and WEB apps on Mainframe?

in the mid-70s there was a small advanced technology symposium held in POK ... the 801 group was there (i.e. risc, precursor to romp, rios, current power, etc) and we were there with a 16-way 370 smp (which was going fine until it was realized that it was going to take MVS decades to come up with 16-way smp support).

there wasn't another such symposium until the one i hosted in the early 80s ... reference
https://www.garlic.com/~lynn/96.html#4a

which included a presentation on running CMS applications under MVS.

for other trivia ... the person responsible for APPN reported to the person doing the CMS under MVS presentation. We used to rib him to stop wasting his time on APPN and work on real networking.

note that a lot of applications running under CMS were ported over from os/360 ... made possible by a little under 64k bytes of code that emulated os/360 system services. there was even a joke that the price/performance of the <64kbyte os simulation in cms was significantly better than the 8mbyte os simulation in mvs.

somewhat in the same category as the internal clist compiler were some enhancements that better than doubled the capability of the cms os/360 system services emulation.

a lot of the cms market segment was thruput and performance sensitive ... and while the "CMS under MVS" supported functional execution ... it didn't address the customer market requirements.

I've mentioned before that the world-wide internal sales & marketing HONE system ... originally started out after the 23jun69 unbundling announcement
https://www.garlic.com/~lynn/submain.html#unbundle

... to provide virtual machines for branch office SEs to keep up their skills running (other) operating systems ... but CMS\APL based applications supporting sales & marketing came to dominate all use (it wasn't too long before a mainframe order could only be processed after it had first been run thru a HONE application). Thru much of the early 80s, there were several repeated significant (failed) efforts to migrate HONE from vm370 to MVS. Part of the issue was that HONE was part of the marketing group and would cycle in a new executive every 18 months or so (typically having come from a branch manager's job). They would come in as the new HONE chief executive, find themselves extremely mortified to learn that HONE was a vm370-based system, and figure they would make their name in the corporation by being responsible for the HONE migration to MVS. After 9-12 months of extensive effort, it would quietly fail ... the executive would eventually be replaced and it would start all over again. misc. past posts mentioning HONE.
https://www.garlic.com/~lynn/subtopic.html#hone

somewhat related, I had proposed an advanced technology new kernel project (i.e. which was the basic theme for the symposium I was sponsoring). my approach was to do a rapid prototyping effort (with a very small core of people) ... but somehow it got picked up as a strategic effort and got totally out of control ... at one point a couple hundred people just writing specifications. It eventually imploded under its own weight ... somewhat the way i had characterized the future system effort:
https://www.garlic.com/~lynn/submain.html#futuresys

in the strategic flavor of the new kernel effort ... part of the justification was about the significant (duplicated) cost of just doing device support and recovery software in the corporation's multiple operating systems. The "new" kernel would be common to all operating systems ... sharing common device support and recovery.

slightly related ... they would let me play disk engineer in the disk engineering lab (bldg. 14) and disk product test lab (bldg. 15). One of the things that they had tried was doing engineering development testing under MVS (in part so they could test multiple devices concurrently instead of the stand-alone dedicated time process they had been using). Unfortunately ... they found that MVS MTBF in that environment was on the order of 15 minutes (i.e. something requiring an MVS re-ipl/reboot). One of the things that I had done for them was work on an i/o supervisor rewrite that would allow multiple devices to be tested concurrently in an operating system environment (and wouldn't fail). I gave an (internal company only) presentation on the results (significantly improved disk engineering development productivity) ... which resulted in taking a lot of heat from the POK RAS manager. misc. past posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

CHROME and WEB apps on Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CHROME and WEB apps on Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 04 Sep 2008 16:01:10 -0400
martin_packer@UK.IBM.COM (Martin Packer) writes:
Personally I think we should have put a lot more effort into TSO and ISPF. That's just a gut feeling... I can't give specifics. It's just knowing how good CMS is that makes me feel that way about TSO. (And given that a lot of my TSO usage is actually inside Batch I could see my Batch programming benefiting also.)

re:
https://www.garlic.com/~lynn/2008m.html#59 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#60 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#61 CHROME and WEB apps on Mainframe?

parts of the company had a really, really hard time adapting to the 23jun69 unbundling announcement and starting to charge for application software ... misc. past posts mentioning 23jun69 unbundling announcement
https://www.garlic.com/~lynn/submain.html#unbundle

including each organization's expense/run-rate had to be covered by their products' income. some organizations ... like the one that ISPF was part of .... continued to have ongoing problems dealing with the situation for a very long time.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

CHROME and WEB apps on Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CHROME and WEB apps on Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 04 Sep 2008 17:44:02 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
somewhat related, I had proposed an advanced technology new kernel project (i.e. which was the basic theme for the symposium I was sponsoring). my approach was to do a rapid prototyping effort (with a very small core of people) ... but somehow it got picked up as a strategic effort and got totally out of control ... at one point a couple hundred people just writing specifications. It eventually imploded under its own weight ... somewhat the way i had characterized the future system effort:
https://www.garlic.com/~lynn/submain.html#futuresys

in the strategic flavor of the new kernel effort ... part of the justification was about the significant (duplicated) cost of just doing device support and recovery software in the corporation's multiple operating systems. The "new" kernel would be common to all operating systems ... sharing common device support and recovery.


re:
https://www.garlic.com/~lynn/2008m.html#59 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#60 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#61 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#62 CHROME and WEB apps on Mainframe?
and
https://www.garlic.com/~lynn/96.html#4a

possibly anticipating current generation product designation ... recent reference to current products:
https://www.garlic.com/~lynn/2008m.html#57 "Engine" in Z/OS?

an early meeting for the new kernel project was being held in a function room at some hudson valley plant site cafeteria ... somebody in the VM group had arranged for the room reservation ... and the cafeteria staff had posted a sign (having misheard the request) saying: ZM meeting ... from then on the effort was frequently referred to as ZM ... an old email reference:
https://www.garlic.com/~lynn/2007h.html#email830527

other old post refs:
https://www.garlic.com/~lynn/2001.html#27 VM/SP sites that allow free access?
https://www.garlic.com/~lynn/2001l.html#25 mainframe question
https://www.garlic.com/~lynn/2001m.html#53 TSS/360
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2002l.html#14 Z/OS--anything new?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

CHROME and WEB apps on Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CHROME and WEB apps on Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 05 Sep 2008 07:45:58 -0400
timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
>note that the first web server outside of europe was on the slac vm system

True, although it's also true it was the first Web server anywhere outside CERN and anywhere outside Switzerland. So it's an even more impressive bit of history. Apparently someone(s) from SLAC attended a conference at CERN, saw this nascent Web technology, thought it was pretty cool, went home and made it work on their VM system rather quickly. This significantly predates the (important) work done at NCSA.

Also true that the very first dynamic (interactive, non-static content) Web application exposed a find (search) interface to VM, also done at SLAC. This was way before even the now quaint CGI technologies.


re:
https://www.garlic.com/~lynn/2008m.html#59 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#60 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#61 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#62 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#63 CHROME and WEB apps on Mainframe?
and
https://ahro.slac.stanford.edu/wwwslac-exhibit

"SLAC" & "CERN" have been "sister" high-energy physic lab locations ... and would regularly exchange/use a lot of common software. SLAC also hosted the monthly vm baybunch meetings starting in the 70s.

there was lots of VM activity going on in the valley during the period.

one of the large vm370-based timesharing commercial bureaus
https://www.garlic.com/~lynn/submain.html#timeshare

had its main datacenter not very far from SLAC. That organization had also hosted the SHARE online computer conferencing (VMSHARE) since aug76
http://vm.marist.edu/~vmshare/

this predated (vm-based) bitnet and listserv computer conferencing.
https://www.garlic.com/~lynn/subnetwork.html#bitnet

It also predated what I started doing in the late 70s ... I got blamed for online computer conferencing on the internal (vast majority vm-based) network (which was larger than the arpanet/internet from just about the beginning until possibly mid-85)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

in the mid-80s, the "HSDT" project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

had a lot of interaction with NSF and various related organizations (including NCSA) on high-speed networking ... misc. past emails
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
and past posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

CHROME and WEB apps on Mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CHROME and WEB apps on Mainframe?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 05 Sep 2008 10:54:58 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
one of the large vm370-based timesharing commercial bureaus
https://www.garlic.com/~lynn/submain.html#timeshare

had its main datacenter not very far from SLAC. That organization had also hosted the SHARE online computer conferencing (VMSHARE) since aug76
http://vm.marist.edu/~vmshare/

this predated (vm-based) bitnet and listserv computer conferencing.
https://www.garlic.com/~lynn/subnetwork.html#bitnet


re:
https://www.garlic.com/~lynn/2008m.html#64 CHROME and WEB apps on Mainframe?

One of the procedures that I managed to get set up was a monthly distribution of all the VMSHARE files, making them available on several internal machines ... including the world-wide sales&marketing support (vm-based) HONE system
https://www.garlic.com/~lynn/subtopic.html#hone

some past email referencing VMSHARE conferencing, VMSHARE files &/or VMSHARE files on HONE:
https://www.garlic.com/~lynn/2006y.html#email780405b
https://www.garlic.com/~lynn/2002e.html#email800310
https://www.garlic.com/~lynn/2006b.html#email800310
https://www.garlic.com/~lynn/2006v.html#email800310
https://www.garlic.com/~lynn/2006v.html#email800310b
https://www.garlic.com/~lynn/2004o.html#email800318
https://www.garlic.com/~lynn/2004o.html#email800329
https://www.garlic.com/~lynn/2006w.html#email800329
https://www.garlic.com/~lynn/2007b.html#email800329
https://www.garlic.com/~lynn/2006v.html#email800331
https://www.garlic.com/~lynn/2007.html#email800331
https://www.garlic.com/~lynn/2006v.html#email800401
https://www.garlic.com/~lynn/2006w.html#email800401
https://www.garlic.com/~lynn/2006v.html#email800409
https://www.garlic.com/~lynn/2007.html#email811225
https://www.garlic.com/~lynn/2007d.html#email820714
https://www.garlic.com/~lynn/2007b.html#email821217
https://www.garlic.com/~lynn/2007b.html#email830227
https://www.garlic.com/~lynn/2006v.html#email841219
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2007b.html#email860111

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

With all the highly publicised data breeches and losses, are we all wasting our time?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: With all the highly publicised data breeches and losses, are we all wasting our time?
Date: September 6th, 2008
Blog: Information Security Network
re:
https://www.garlic.com/~lynn/2008m.html#55 With all the highly publicised data breeches and losses, are we all wasting our time?
https://www.garlic.com/~lynn/2008m.html#56 With all the highly publicised data breeches and losses, are we all wasting our time?

as an aside ... the (initially cal. state) data breach notification legislation was to address the number one issue from the in-depth consumer privacy studies. to address the number two issue from those studies (institutional and gov. denial of service ... based on personal information) ... there was also legislation drafted restricting (personal) data sharing to opt-in only. this legislation has yet to pass ... but the subsequent opt-out data-sharing provisions in the (federal) GLB act are similar. Other related legislation includes the HIPAA act and, in the EU, the data privacy directive.

While the X9.59 financial standard had some flavor of addressing data breaches ... not by specifying processes to protect the data ... but by (paradigm change) eliminating the usefulness of much of the data for fraudulent purposes ... there has also been the X9.99 financial privacy standard (which I helped co-author) that attempts to take into account GLBA and HIPAA (like where personal medical related information might be included and/or inferred from financial statements) ... as well as EU-DPD.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Is Virtualisation a Fad?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Is Virtualisation a Fad?
Date: September 6th, 2008
Blog: Computers and Software
There was the "fad" that happened in the 90s: the plummeting cost of hardware resulted in the approach that it was easier/simpler to give a separate hardware box to each feature, function, and application ... than it was to figure out how to get multiple things to co-exist on the same hardware.

This led to a dramatic increase in the amount of hardware (now frequently significantly under-utilized) .... effectively trading off hardware costs against the high level of skill and expertise that had been needed to get multiple things to co-exist on common hardware.

The trade-off has since changed: there are not only the direct hardware costs ... but also the power costs and the overhead of managing the large proliferation in the number of physical boxes ... while the use of virtualization has drastically reduced the skill level and expertise needed to get a lot of different applications to co-exist on the same hardware.

Server consolidation is now frequently seeing a factor of ten reduction in the number of boxes ... along with a similar reduction in power consumption and management/administrative overhead for the physical boxes (including claims of being able to reduce 300 datacenters to 30 datacenters; server consolidation also becomes a "green" theme) .... while the leveraging of virtualization technology accomplishes the consolidation with only a nominal increase in skills and expertise.

There has also been work on leveraging virtualization technology for things called virtual appliances. This takes the current level of software complexity and further partitions it into smaller pieces (reducing complexity and improving security). There have been various articles written claiming that the virtual appliance trend is the death knell for some existing operating systems.

An earlier flavor of virtual appliances was referred to as server virtual machines, dating back to the cp67 virtual machine operating system for the 360/67 (predating vm370).

similar question thread archived here:
https://www.garlic.com/~lynn/2008h.html#45 How can companies decrease power consumption of their IT infrastructure?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

New technology trends?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: New technology trends?
Date: September 6th, 2008
Blog: Information Security
A recent trend has been using virtualization for server consolidation ... reducing both hardware and management costs ... but also for "green" reasons, reducing power and cooling loads.

However, in the area of information security, virtualization is also being used for things like virtual appliances ... basically further decomposing existing operating system environments into much simpler components ... with the reduction in complexity and partitioning being able to significantly improve security.

recent reference in this archived answer
https://www.garlic.com/~lynn/2008m.html#67 Is Virtualization a Fad?

and reference to quite a bit earlier use (I admit to working on the product while an undergraduate ... but not actually being aware of this particular use at the time)
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

and this medical security information related topic ... archived here:
https://www.garlic.com/~lynn/2008m.html#66 With all the highly publicised data breeches and losses, are we all wasting our time?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Speculation ONLY

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speculation ONLY
Newsgroups: alt.folklore.computers
Date: Sat, 06 Sep 2008 09:33:51 -0400
jmfbahciv <jmfbahciv@aol> writes:
Could there be some way to define a "distance" from each datum to seven others so that large amounts of diverse data could be indexed within the whole of all data collected for all subjects?

there have been published studies related to "cognitive" distance ... in the function of the brain and things like Alzheimer's.

there was also a study of patent classification ... there has been speculation that patents have been purposefully mis-classified ... possibly with the motivation of contributing to future litigation. the study used clustering analysis of the contents of patents and found something like 30 percent of dataprocessing software/hardware patents filed in other categories ... recent mention:
https://www.garlic.com/~lynn/2008m.html#9 Unbelievable Patent for JCL

for additional drift ... long ago and far away ... cluster analysis was used for program execution analysis and re-organization for optimized operation in a virtual memory (paged) environment (I coded some of the data collection mechanisms). This was used in the 70s by some number of the os/360 product groups as part of the transition from a real storage environment to a virtual memory environment. It was released as a product in 1976 as VS/Repack.
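
the flavor of the approach (purely a made-up minimal sketch, not the actual vs/repack implementation ... the routine names, sizes, page size, affinity counts, and the single greedy pass are all invented here for illustration): harvest counts of which routines execute near each other from an execution trace, then pack high-affinity routines onto the same virtual page (shrinking the working set and the page fault rate):

#include <stdio.h>

#define NROUTINES 6
#define PAGESIZE  4096

/* routine names, sizes, and affinity counts are all invented for illustration */
static const char *name[NROUTINES] = { "open", "read", "parse", "fmt", "write", "close" };
static const int   size[NROUTINES] = { 1400, 2000, 1800, 1200, 2200, 900 };

/* affinity[i][j]: how often routines i and j were seen executing near each
 * other in the (hypothetical) execution trace */
static const int affinity[NROUTINES][NROUTINES] = {
    {  0, 90,  5,  2,  3, 40 },
    { 90,  0, 80,  4,  6,  2 },
    {  5, 80,  0, 70,  8,  1 },
    {  2,  4, 70,  0, 60,  1 },
    {  3,  6,  8, 60,  0, 30 },
    { 40,  2,  1,  1, 30,  0 },
};

int main(void)
{
    int page_of[NROUTINES];
    int page_used[NROUTINES] = { 0 };   /* bytes already packed on each page */
    int npages = 0;

    for (int i = 0; i < NROUTINES; i++)
        page_of[i] = -1;

    /* greedy pass: place each routine on the page of its highest-affinity,
     * already-placed neighbor that still has room; otherwise start a new page */
    for (int i = 0; i < NROUTINES; i++) {
        int best = -1, bestcnt = 0;
        for (int j = 0; j < NROUTINES; j++)
            if (page_of[j] >= 0 && affinity[i][j] > bestcnt &&
                page_used[page_of[j]] + size[i] <= PAGESIZE) {
                best = page_of[j];
                bestcnt = affinity[i][j];
            }
        if (best < 0)
            best = npages++;            /* no suitable neighbor: new page */
        page_of[i] = best;
        page_used[best] += size[i];
    }

    for (int p = 0; p < npages; p++) {
        printf("page %d:", p);
        for (int i = 0; i < NROUTINES; i++)
            if (page_of[i] == p)
                printf(" %s", name[i]);
        printf("  (%d bytes used)\n", page_used[p]);
    }
    return 0;
}

real cluster analysis over the trace data is considerably more sophisticated than a single greedy pass ... but the payoff is the same: routines that run together end up paged together.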

recent past posts mentioning vs/repack:
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
https://www.garlic.com/~lynn/2008d.html#35 Interesting Mainframe Article: 5 Myths Exposed
https://www.garlic.com/~lynn/2008e.html#16 Kernels
https://www.garlic.com/~lynn/2008f.html#36 Object-relational impedence
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Why SSNs Are Not Appropriate for Authentication and when, where and why should you offer/use it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Why SSNs Are Not Appropriate for Authentication and when, where and why should you offer/use it?
Date: September 6th, 2008
Blog: Information Security
It is much the same way that account numbers are not appropriate for authentication of financial transactions. Both are required in numerous & various business processes.

This has led to diametrically opposing business requirements .... on one hand, they need to be readily available (for the numerous associated business processes) and on the other hand they have to be kept confidential and never divulged.

We had been called in to help word smith the cal. state (and later federal) electronic signature legislation. Some of the groups involved were also active in privacy activities and had done extensive, in-depth consumer studies of the subject. They found that the two most important (privacy) issues were

1) identity theft (and especially the subcategory account theft)
2) (gov/institutional) denial of service (based on personal information)

There was some evidence of a relationship between data breaches and identity theft ... and there seemed to be little attention being paid to dealing with/preventing data breaches. This appeared to be the motivation behind the legislation for data breach notification (hoping that the resulting publicity would result in data breach mitigation measures and then a reduction in identity theft).

Part of the difficulty with lots of the data breach measures is the prevalent dual-use of the information: information required for business processes is also being used for authentication.

We've commented in the past that (because of the diametrically opposing requirements) even if the planet were buried under miles of information-hiding encryption (as a data breach countermeasure) ... it still wouldn't prevent data leakage ... because of the dual-use requirement for the information (it has to be readily available for all the business processes, while the authentication use requires that it be kept confidential and never divulged).

The vast majority of the data breaches have been related to account numbers and the account fraud flavor of identity theft. In the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. The resulting x9.59 financial standard did nothing to directly address data breaches. However, x9.59 did change the paradigm and eliminated the use of the account number for any form of authentication. As a result, skimming, data breaches, security breaches, and other mechanisms related to harvesting information from previous transactions were eliminated as a threat (i.e. breaches could still happen but the crooks couldn't use the information for fraudulent transactions).

The number two privacy issue was the denial of service issue. To address that, there was some legislation drafted regarding information sharing and opt-in ... but it has not yet passed. Something similar was passed in the GLB act, but it was opt-out.

When we did the x9.99 financial industry privacy standard, we had to take into account HIPAA, GLBA, as well as EU-DPD

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

TJ Maxx - why are they still in business?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: TJ Maxx - why are they still in business?
Date: September 6th, 2008
Blog: Information Security
we had been brought in to help word smith the cal. state (and then federal) electronic signature legislation. Some of the groups were also heavily involved in privacy and had done detailed, in-depth, consumer privacy surveys. They found the two most important (privacy) issues were

1) identity theft (and particularly the sub-type, account fraud)
2) (gov/institutional) denial of service (based on personal information)

there was some evidence that there was a connection between data breaches and identity theft, but there didn't seem to be a lot being done about it. This appeared to be the motivation behind the data breach notification legislation (hoping that the publicity would result in data breach countermeasures leading to less identity theft).

Now, in the mid-90s, the X9A10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. Detailed, in-depth, end-to-end vulnerability and threat assessments were done that contributed to the resulting x9.59 financial standard (for all retail payments). X9.59 didn't directly do anything about data breaches, however it eliminated the ability of crooks to use the obtained information (from data breaches) for fraudulent purposes.

Some of the in-depth threat & vulnerability investigation:

1) security proportional to risk ... (in the current infrastructure) the value of the information to the merchant is basically proportional to some percentage of the profit from the transaction. However, the value of the information to the crooks is basically the account balance &/or credit-limit of the associated account ... which can easily be worth 100 times the value of the information to the merchant. The result is that the crooks may be able to outspend the merchant by a factor of 100 attacking the infrastructure (compared with what the merchant can afford to spend defending it). With such a huge mismatch between the attacker and the defender ... the attacker will frequently prevail.

2) the transaction information is effectively of a dual-use nature. the transaction information is required for a multitude of business processes and needs to be readily available. At the same time, information (in the transaction) is effectively being used for authentication ... which requires that the information be kept absolutely confidential and never be made available to anybody (taken literally, that would preclude even presenting a payment card at point-of-sale). We've periodically commented that even if the planet were buried under miles of information-hiding encryption, the diametrically opposing requirements from the dual-use nature of the information would mean that it is impossible to prevent data leakage.

Part of the paradigm change of the X9.59 financial standard was eliminating the dual-use nature of the information (i.e. eliminating the value of the information to the crooks and therefore the motivation for data breaches).

various x9.59 references
https://www.garlic.com/~lynn/x959.html#x959

Now the number two item in the in-depth privacy survey was the "denial of service" issue. This seemed to be the motivation for legislative drafts requiring "opt-in" for information sharing ... which have never passed. However some of this shows up in the "opt-out" part of the GLBA legislation. When we were doing the X9.99 financial privacy standard, we had to take into account GLBA as well as various HIPAA provisions and stuff from EU-DPD

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

What are security areas to be addressed before starting an e-commerce transaction or setting up a portal?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: What are security areas to be addressed before starting an e-commerce transaction or setting up a portal?
Date: September 6th, 2008
Blog: E-Commerce
we had been brought in as consultants for a small client/server startup that wanted to do payment transactions on their server. they had also invented this thing called SSL which they wanted to use as part of the implementation. this is now frequently referred to as electronic commerce. Part of the issue was that several of the security assumptions were not followed thru for the majority of the deployments.

later we had been brought in to help word smith the cal. state (and then federal) electronic signature legislation. Some of the groups were also heavily involved in privacy and had done detailed, in-depth, consumer privacy surveys. They found the two most important (privacy) issues were

1) identity theft (and particularly the sub-type, account fraud)
2) (gov/institutional) denial of service (based on personal information)

there was some evidence that there was a connection between data breaches and identity theft, but there didn't seem to be a lot being done about it. This appeared to be the motivation behind the data breach notification legislation (hoping that the publicity would result in data breach countermeasures leading to less identity theft).

After the work on electronic commerce, we were also asked to participate in the X9A10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. Detailed, in-depth, end-to-end vulnerability and threat assessments were done that contributed to the resulting x9.59 financial standard (for all retail payments). X9.59 didn't directly do anything about data breaches, however it eliminated the ability of crooks to use the obtained information (from data breaches) for fraudulent purposes.

Some of the in-depth threat & vulnerability investigation:

1) security proportional to risk ... (in the current infrastructure) the value of the information to the merchant is basically proportional to some percentage of the profit from the transaction. However, the value of the information to the crooks is basically the account balance &/or credit-limit of the associated account ... which can easily be worth 100 times the value of the information to the merchant. The result is that the crooks may be able to outspend the merchant by a factor of 100 attacking the infrastructure (compared with what the merchant can afford to spend defending it). With such a huge mismatch between the attacker and the defender ... the attacker will frequently prevail.

2) the transaction information is effectively of a dual-use nature. the transaction information is required for a multitude of business processes and needs to be readily available. At the same time, information (in the transaction) is effectively being used for authentication ... which requires that the information be kept absolutely confidential and never be made available to anybody (taken literally, that would preclude even presenting a payment card at point-of-sale). We've periodically commented that even if the planet were buried under miles of information-hiding encryption, the diametrically opposing requirements from the dual-use nature of the information would mean that it is impossible to prevent data leakage.

Part of the paradigm change of the X9.59 financial standard was eliminating the dual-use nature of the information (i.e. eliminating the value of the information to the crooks and therefore the motivation for data breaches).

The majority use of SSL in the world today is for this thing called electronic commerce ... for the purpose of hiding the financial transaction information. One of the results of the X9.59 financial standard was that it was no longer necessary to hide transaction information ... which effectively eliminates the majority of the requirement for SSL.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Sat, 06 Sep 2008 15:08:13 -0400
Morten Reistad <first@last.name> writes:
All the large companies do extensive data mining using these databases. They couldn't do so if it was illegal, so they have seen to it that this is perfectly legal.

there are parties that do war-dialing ... looking for things like modems & faxes ... by dialing all possible numbers.

some number of companies (large & small) have been regularly fined for (illegally?) using such information for spam calls and spam faxes. although it does seem that congress has provided an exemption for political campaigns to (otherwise illegally) use do-not-call lists (we weren't getting some types of political calls until after we had registered for do-not-call)

we had been called in to help word-smith the cal state (and later federal) electronic signature legislation.
https://www.garlic.com/~lynn/subpubkey.html#signature

some of the organizations involved were also heavily involved in privacy issues and had done extensive, in-depth, customer surveys on the subject. The two top matters were:

1) identity theft (and especially sub-category, account fraud)
2) (gov/institutional) denial of service (using personal information)

there had been some evidence of a relationship between data breaches and identity theft ... which seemed to be the motivation behind the data breach notification legislation (the assumption being that the publicity would promote data breach countermeasures, reducing the rate of identity theft).

there was also some legislation drafted that would require "opt-in" for data sharing (which is the source of much of the mined information) ... which has been heavily lobbied against and has yet to pass. However, there were somewhat related provisions in the (federal) GLBA legislation providing for "opt-out" with regard to data sharing (i.e. "opt-in" only allows data sharing with explicit permission; "opt-out" allows data sharing unless there has been a specific request not to).

misc. recent related posts:
https://www.garlic.com/~lynn/2008m.html#66 With all the highly publicised data breeches and losses, are we all wasting our time?
https://www.garlic.com/~lynn/2008m.html#70 Why SSNs Are Not Appropriate for Authentication and when, where and why should you offer/use it?
https://www.garlic.com/~lynn/2008m.html#71 TJ Maxx - why are they still in business?

note GLBA was also involved in repealing Glass-Steagall, which has also been discussed as contributing to the current problems in the financial infrastructure ... some recent posts:
https://www.garlic.com/~lynn/2008l.html#42 dollar coins
https://www.garlic.com/~lynn/2008l.html#67 dollar coins
https://www.garlic.com/~lynn/2008l.html#70 dollar coins
https://www.garlic.com/~lynn/2008m.html#16 Fraud due to stupid failure to test for negativ

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Speculation ONLY

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speculation ONLY
Newsgroups: alt.folklore.computers
Date: Sun, 07 Sep 2008 08:33:45 -0400
re:
https://www.garlic.com/~lynn/2008m.html#69 Speculation ONLY

and now for something slightly different (from a couple recent articles):

25 years of conventional evaluation of data analysis proves worthless in practice
http://www.uu.se/news/news_item.php?typ=pm&id=277

from above:
So-called intelligent computer-based methods for classifying patient samples, for example, have been evaluated with the help of two methods that have completely dominated research for 25 years. Now Swedish researchers at Uppsala University are revealing that this methodology is worthless when it comes to practical problems. The article is published in the journal Pattern Recognition Letters.

... snip ...

and ...
"Our main conclusion is that this methodology cannot be depended on at all, and that it therefore needs to be immediately replaces by Bayesian methods, for example, which can deliver reliable measures of the uncertainty that exists. Only then will multivariate analyses be in any position to be adopted in such critical applications as health care," says Mats Gustafsson.

... snip ...

and related:

Semantic Provenance for eScience: Managing the Deluge of Scientific Data
http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2008/07&file=w4wsw.xml&xsl=article.xsl&

from above:
Provenance information in eScience is metadata that's critical to effectively manage the exponentially increasing volumes of scientific data from industrial-scale experiment protocols. Semantic provenance, based on domain-specific provenance ontologies, lets software applications unambiguously interpret data in the correct context. The semantic provenance framework for eScience data comprises expressive provenance information and domain-specific provenance ontologies and applies this information to data management.

... snip ...

what I do for the RFC index
https://www.garlic.com/~lynn/rfcietff.htm

uses much simpler metadata for RFCs
https://www.garlic.com/~lynn/rfcterms.htm

and work on merging various taxonomies and glossaries
https://www.garlic.com/~lynn/index.html#glosnote

in a much earlier implementation (in work on knowledge management) we had done some work on organizing NIH's UMLS
http://www.nlm.nih.gov/research/umls/

for other drift ... recent posts mentioning NIH's NLM
https://www.garlic.com/~lynn/2008l.html#80
https://www.garlic.com/~lynn/2008m.html#6

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Sun, 07 Sep 2008 09:36:00 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
business news program just "asked" what did the GSEs do wrong? ... and their immediate answer: they bought $5bil in (toxic) CDOs with $80mil of capital ... i.e. heavily leveraged -- not quite 100 times (the potential $25bil bailout estimates for GSEs seem to be for all holdings)

this seems penny-ante stuff compared to other institutions that have already taken approx. $500bil in write-downs and projection will eventually be $1-$2 trillion.


re:
https://www.garlic.com/~lynn/2008m.html#15 Fraud due to stupid failure to test for negative

and then reference to Buffett getting out of GSEs because of their use of accounting methods
https://www.garlic.com/~lynn/2008m.html#17 Fraud due to stupid failure to test for negative

from today:

Fannie, Freddie Capital Concerns Prompt Paulson Plan
http://www.bloomberg.com/apps/news?pid=20601087
http://www.bloomberg.com/apps/news?pid=20601087&sid=at2rZoL11_sw&refer=home

from above:
Treasury Secretary Henry Paulson decided to take control of Fannie Mae and Freddie Mac after a review found the beleaguered mortgage-finance companies used accounting methods that inflated their capital, according to people with knowledge of the decision.

... snip ...

other articles

Why U.S. moved to take over mortgage giants
http://www.iht.com/articles/2008/09/07/business/fannie.php
Treasury Secretary Paulson's balancing act on Fannie and Freddie
http://www.latimes.com/business/la-fi-fannie7-2008sep07,0,2490258.story
Feds to Take Over Fannie, Freddie
http://www.time.com/time/nation/article/0,8599,1839294,00.html
Paulson readies the 'bazooka'
http://money.cnn.com/2008/09/06/news/economy/fannie_freddie_paulson.fortune/index.htm?postversion=2008090621
Cost of Fannie, Freddie move unknown, but likely in billions
http://money.cnn.com/2008/09/06/news/economy/Fannie_Freddie_rescue_cost/index.htm?postversion=2008090619

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

When risks go south: FM&FM to be nationalized

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: When risks go south: FM&FM to be nationalized
Date: September 7, 2008 10:02 AM
Blog: Financial Cryptography
re:
https://financialcryptography.com/mt/archives/001092.html

recent reference to GSE documents obtained by the washington post touting the brilliant strategy of buying toxic CDOs
https://www.garlic.com/~lynn/2008m.html#12 Fraud due to stupid failure to test for negative

reference to GSE having bought $5bil in toxic CDOs with $80 mil of capital (i.e. heavily leveraged ... not quite 100 times)
https://www.garlic.com/~lynn/2008m.html#15 Fraud due to stupid failure to test for negative

and reference to Buffett getting out of GSE because of their use of accounting methods (after being largest Freddie shareholder in 2000 & 2001):
https://www.garlic.com/~lynn/2008m.html#17 Fraud due to stupid failure to test for negative

from today:

Fannie, Freddie Capital Concerns Prompt Paulson Plan
http://www.bloomberg.com/apps/news?pid=20601087
http://www.bloomberg.com/apps/news?pid=20601087&sid=at2rZoL11_sw&refer=home

from above:
Treasury Secretary Henry Paulson decided to take control of Fannie Mae and Freddie Mac after a review found the beleaguered mortgage-finance companies used accounting methods that inflated their capital, according to people with knowledge of the decision.

... snip ...

other articles

Why U.S. moved to take over mortgage giants
http://www.iht.com/articles/2008/09/07/business/fannie.php
Treasury Secretary Paulson's balancing act on Fannie and Freddie
http://www.latimes.com/business/la-fi-fannie7-2008sep07,0,2490258.story
Feds to Take Over Fannie, Freddie
http://www.time.com/time/nation/article/0,8599,1839294,00.html
Paulson readies the 'bazooka'
http://money.cnn.com/2008/09/06/news/economy/fannie_freddie_paulson.fortune/index.htm?postversion=2008090621
Cost of Fannie, Freddie move unknown, but likely in billions
http://money.cnn.com/2008/09/06/news/economy/Fannie_Freddie_rescue_cost/index.htm?postversion=2008090619

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Sun, 07 Sep 2008 14:24:34 -0400
re:
https://www.garlic.com/~lynn/2008m.html#75 Fraud due to stupid failure to test for negative

and related thread from financial cryptography blog:
https://www.garlic.com/~lynn/2008m.html#76 When risks go south: FM&FM to be nationalized

and then it happened:

Feds unveil rescue plan for Fannie, Freddie
http://money.cnn.com/2008/09/07/news/companies/fannie_freddie/index.htm?postversion=2008090711
Paulson Engineers U.S. Takeover of Fannie, Freddie
http://www.bloomberg.com/apps/news?pid=20601103&sid=ajcw4yxxPGJ8&refer=news
Feds Take Over Fannie, Freddie
http://www.time.com/time/nation/article/0,8599,1839294,00.html
U.S. to take over mortgage giants Fannie Mae, Freddie Mac
http://www.usatoday.com/money/economy/housing/2008-09-07-fannie-freddie-plan_N.htm
US takes over key mortgage firms
http://news.bbc.co.uk/2/hi/business/7602992.stm
U.S. Bails Out Mortgage Giants
http://www.forbes.com/home/2008/09/07/fannie-freddie-bailout-biz-cx_lm_0907mortgage.html
Feds unveil rescue plan for Fannie, Freddie
http://money.cnn.com/2008/09/07/news/companies/fannie_freddie/index.htm?postversion=2008090714
How Fannie and Freddie came under full government control
http://money.cnn.com/2008/09/07/news/economy/velshi_comments/index.htm?postversion=2008090713

and ...

Treasury Extends Secured Credit Line to Federal Home Loan Banks
http://www.bloomberg.com/apps/news?pid=20601087
http://www.bloomberg.com/apps/news?pid=20601087&sid=af5C3f6SpmFo&refer=home

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

When risks go south: FM&FM to be nationalized

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: When risks go south: FM&FM to be nationalized
Date: September 7, 2008 03:27 PM
Blog: Financial Cryptography
re:
https://www.garlic.com/~lynn/2008m.html#76 When risks go south: FM&FM to be nationalized

Why U.S. moved on mortgage giants; Freddie Mac's books jolted inspectors
http://www.iht.com/articles/2008/09/07/business/fannie.php

from above:
The U.S. government's planned takeover of Fannie Mae and Freddie Mac came together hurriedly after advisers poring over the companies' books for the Treasury Department concluded that Freddie's accounting methods had overstated its capital cushion, according to regulatory officials briefed on the matter.

... snip ...

several weeks ago there was interview with Buffett who commented that after being the largest Freddie stockholder in 2000&2001, got out of GSEs because of their accounting methods.
https://www.garlic.com/~lynn/2008m.html#17 Fraud due to stupid failure to test for negative

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Mon, 08 Sep 2008 08:10:02 -0400
Lawrence Statton <yankeeinexile@gmail.com> writes:
[1] It's actually NOT called Bellcore anymore -- some other company (SAIC?) runs it now.

i.e. SAIC bought bellcore
https://www.garlic.com/~lynn/2008m.html#48 Blinkylights

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Mon, 08 Sep 2008 08:27:45 -0400
Morten Reistad <first@last.name> writes:
This is a de-facto bankruptcy, and it is probably the biggest in world history. Even compared to world GNP it overshadows the 1772 VOC crash (Dutch West Indies Company bankruptcy) and is 10 times bigger than MCI/Worldcom and Enron put together. Limited companies, and the institution of bankruptcy, are a 13th century invention, so the fall of the Roman Empire is excluded from comparison.

Look closely at who are appointed caretakers. Those should be some of the best lawyers available. Anyone unknown will bring a scent of decaying fish to the place.


re:
https://www.garlic.com/~lynn/2008m.html#76 When risks go south: FM&FM to be nationalized
https://www.garlic.com/~lynn/2008m.html#78 When risks go south: FM&FM to be nationalized

earlier in the year there were comments that GSEs shouldn't be affected by the subprime credit crisis ... because they hadn't dealt (directly) in such mortgages.

then there were buffett's comments that he had been the largest freddie stockholder in 2000 & 2001 ... but then got out because of their accounting methods.

then it comes out that in addition to doing what they had been chartered to do (buy mortgages) ... they had also gotten into heavily leveraged buying of (initially triple-A rated) toxic CDOs. There is some mention that they may have over $5bil in toxic CDOs (that will see significant write-downs)

misc. past posts mentioning GSEs:
https://www.garlic.com/~lynn/2008j.html#76 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#1 dollar coins
https://www.garlic.com/~lynn/2008m.html#12 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#15 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#16 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#17 Fraud due to stupid failure to test for negative

however, that is somewhat small potatoes compared to the approx $500bil in toxic CDO writedowns that have already occurred in the industry ... and speculation that citigroup will win the toxic CDO writedown sweepstakes ... with already nearly $60bil in writedowns (and speculation about how many more toxic CDOs exist in the $1 trillion that citigroup is carrying off-balance-sheet) ... misc. past references:
https://www.garlic.com/~lynn/2008g.html#44 Fixing finance
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom
https://www.garlic.com/~lynn/2008k.html#1 dollar coins
https://www.garlic.com/~lynn/2008k.html#12 dollar coins
https://www.garlic.com/~lynn/2008k.html#36 dollar coins
https://www.garlic.com/~lynn/2008k.html#41 dollar coins

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Mon, 08 Sep 2008 08:55:37 -0400
re:
https://www.garlic.com/~lynn/2008m.html#80 Fraud due to stupid failure to test for negative

business news shows this morning ... GSE problem is because of the enormously corrupt congressional lobbyist infrastructure in washington ... recent reference
https://www.garlic.com/~lynn/2008m.html#49 Taxes
https://www.garlic.com/~lynn/2008m.html#50 Taxes

that GSEs had the largest and most powerful lobby and managed to "block congressional action to correct the problems" (i.e. some are coming out and saying most of the blame for the current GSE situation falls on congress).

some statements that the housing market correction is resetting values to approx. 2000 levels (when the big infusion of toxic CDO funds started fueling speculation) ... which somewhat corresponds to the regions that saw the largest balloon in values (during this decade) also tending to see the biggest reset in values.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Data sharing among Industry players about frauds

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Data sharing among Industry players about frauds
Date: September 8th, 2008
Blog: Financial Crime Risk, Fraud and Security
critical infrastructure
https://en.wikipedia.org/wiki/Critical_Infrastructure_Protection

included provisions for threat & vulnerability ISACs (information sharing) in the individual industry sectors ... including one for the financial infrastructure.

one of the first major ISAC issues brought up (at least) in the financial sector critical infrastructure meetings was whether the ISAC would be subject to the federal freedom of information act (worried that criminal elements could take advantage of FOI).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Mon, 08 Sep 2008 13:15:33 -0400
greymaus <greymausg@mail.com> writes:
I have a dim memory of someone walking off with [BIGNUM] amount of bonus. May be incorrect, as we have enough doing that here without worrying about the U.S.

re:
https://www.garlic.com/~lynn/2008m.html#75 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#76 When risks go south: FM&FM to be nationalized
https://www.garlic.com/~lynn/2008m.html#77 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#78 When risks go south: FM&FM to be nationalized
https://www.garlic.com/~lynn/2008m.html#80 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#81 Fraud due to stupid failure to test for negative

the past discussions about "fixing" GSEs have always had comments about it being unlikely to happen because the GSE lobby has been one of the most powerful in washington (discussions around the subject almost as if there is a symbiotic relationship between congress and the lobbyists) ... the most recent (swirl of comments about corrupt congress and the power of lobbyists) was around Buffett's interview and his comments about getting out of the GSEs because of their accounting methods (after having been the largest freddie shareholder in 2000/2001).

besides the individual [BIGNUM] amounts, there have been articles about aggregate bonuses at wall street financial houses being $137 billion in the 2002-2007 runup to the current credit crisis; i.e. declaring enormous profits, a lot of it from the highly leveraged toxic CDOs ... resulting in the enormous bonuses ... now those enormous profits are being rebooked as enormous losses (approx. $500bil so far ... but projections to reach $1-2 trillion) ... but it is unlikely that the enormous bonuses will be reclaimed.

this aspect has also been brought up as part of the moral hazard problem ... rewarding extremely risky behavior ... with no accountability or downside when things fail (either at a personal level or an institutional level)

misc. past posts mentioning moral hazard
https://www.garlic.com/~lynn/2008g.html#64 independent appraisers
https://www.garlic.com/~lynn/2008j.html#71 lack of information accuracy
https://www.garlic.com/~lynn/2008j.html#76 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#16 dollar coins
https://www.garlic.com/~lynn/2008l.html#51 Monetary affairs on free reign, but the horse has Boulton'd
https://www.garlic.com/~lynn/2008l.html#67 dollar coins

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

question for C experts - strcpy vs memcpy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: question for C experts - strcpy vs memcpy
Newsgroups: bit.listserv.ibm-main
To: <ibm-main@bama.ua.edu>
Date: Mon, 08 Sep 2008 15:09:13 -0400
joarmc@SWBELL.NET (John McKown) writes:
If I am copying literal text into a char array, which do you think is better:

strcpy(dest,"LITERAL");

OR

memcpy(dest,"LITERAL",8);

?? I lean towards memcpy because the C run-time reference says that it is a builtin function and done in-line. Which I would guess would mean better performance. Why don't I just look at the generated code? Because I don't have a C compiler for z/OS. I'm writing my code on Linux using GCC.


strcpy has been severely deprecated ... related to the significant occurrence of buffer overflow vulnerabilities in applications implemented in the C programming language.
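
for the specific example in the question ... a minimal sketch (the buffer sizes here are made up for illustration, not from the original post): when copying a string literal, the length (including the NUL terminator) is known at compile time, so a memcpy with sizeof is straightforward ... the real exposure is an unchecked strcpy of data whose length isn't known until run time:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char dest[16];                      /* made-up destination size */

    /* copying a literal: sizeof "LITERAL" is 8 (7 chars + NUL), known at
     * compile time, so the terminator is copied along with the text */
    memcpy(dest, "LITERAL", sizeof "LITERAL");
    printf("%s\n", dest);

    /* the classic exposure: strcpy has no idea how big dest is */
    const char *untrusted = "this string is much longer than 16 bytes";
    /* strcpy(dest, untrusted);         ... would overflow dest */

    /* a bounded alternative: truncate and always NUL-terminate */
    strncpy(dest, untrusted, sizeof dest - 1);
    dest[sizeof dest - 1] = '\0';
    printf("%s\n", dest);

    return 0;
}

whether memcpy of a literal is actually faster than strcpy on a particular compiler is a separate question (both are frequently inlined as builtins) ... the deprecation issue is specifically about the unchecked/unbounded case.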

one reference

Secure programmer: Countering buffer overflows
http://www.ibm.com/developerworks/linux/library/l-sp4.html

lots of past posts mentioning buffer overflow vulnerabilities (including that they used to be the vast majority of all exploits)
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Tue, 09 Sep 2008 08:23:33 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
business news shows this morning ... GSE problem is because of the enormously corrupt congressional lobbyist infrastructure in washington ... recent reference
https://www.garlic.com/~lynn/2008m.html#49 Taxes
https://www.garlic.com/~lynn/2008m.html#50 Taxes


re:
https://www.garlic.com/~lynn/2008m.html#80 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#81 Fraud due to stupid failure to test for negative

WSJ finds someone to blame.... be skeptical, and tell the WSJ to grow up.
https://financialcryptography.com/mt/archives/001095.html

from above:

Fannie Mae's Patron Saint
http://online.wsj.com/article/SB122091796187012529.html?mod=googlenews_wsj

and from the article:
In 2000, then-Rep. Richard Baker proposed a bill to reform Fannie and Freddie's oversight. Mr. Frank dismissed the idea, saying concerns about the two were "overblown" and that there was "no federal liability there whatsoever."

... snip ...

and ...
In January of last year, Mr. Frank also noted one reason he liked Fannie and Freddie so much: They were subject to his political direction. Contrasting Fan and Fred with private-sector mortgage financers, he noted, "I can ask Fannie Mae and Freddie Mac to show forbearance" in a housing crisis. That is to say, because Fannie and Freddie are political creatures, Mr. Frank believed they would do his bidding.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

WSJ finds someone to blame.... be skeptical, and tell the WSJ to grow up

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: WSJ finds someone to blame.... be skeptical, and tell the WSJ to grow up.
Date: September 9, 2008 09:06 AM
Blog: Financial Cryptography
re:
https://financialcryptography.com/mt/archives/001095.html

also
https://www.garlic.com/~lynn/2008m.html#80 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#81 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#85 Fraud due to stupid failure to test for negative

On one side is the moral hazard argument ... that if extremely risky behavior is rewarded by bailouts ... then they never learn responsible behavior ("grow up"?) and the bad behavior continues to get worse (some comments that after the financial industry bailouts, there looms the US automobile industry bailouts).

On the other side there are references and hand-wringing that large GSE holders were pension funds and other countries (sovereign funds).

However, there was an interview with Buffett a couple weeks ago where he mentioned having been Freddie's largest shareholder in 2000/2001 ... but got out of GSEs because of their accounting methods (prudent adults look behind the curtain).

All during this period there were comments that nothing would be done about GSEs (and their related accounting practices) because they had one of the most powerful lobbies in Washington.

There have been a number of programs and articles about extremely corrupt congress (seemingly symbiotic relationship between congress and the lobbies).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Fraud due to stupid failure to test for negative

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Tue, 09 Sep 2008 10:55:24 -0400
greymaus <greymausg@mail.com> writes:
A firing squad would tighten affairs up quickly, shoot them all and let God sort out the blame. [Problem being, the crooks move in, strip out the assets, and move on, what is left is the people trying to sort out the mess]

related comment
https://www.garlic.com/~lynn/2008m.html#86 WSJ finds someone to blame.... be skeptical, and tell the WSJ to grow up.
and
https://www.garlic.com/~lynn/2008m.html#84 Fraud due to stupid failure to test for negative

the news shows have somewhat been making the point that if Buffett had been the largest freddie shareholder in 2000/2001 and then got out of GSEs because of their accounting methods ... then everybody knew about the problems ... along with periodic comments all during the past 6-7 yrs that nothing would be fixed because the GSE lobby in washington was so powerful ...

and repeat of
https://www.garlic.com/~lynn/2008m.html#49 Taxes
https://www.garlic.com/~lynn/2008m.html#50 Taxes

there was video of an economists' roundtable discussion about revising the tax code to a straight tax (and eliminating special provisions/deductions) ... the benefits were

• significantly reduce the lobbying activity corrupting congress

• reduce the tax code from over 60,000 pages to 400-500 pages (a significant percent of current GNP is diverted to dealing with the tax code; eliminating that complexity would be an enormous boost to the economy)

• lots of business vitality is diverted to optimizing for the consequences of provisions in the tax code. eliminating that distraction would improve business competitiveness and also be a boost to the economy

the biggest disagreement wasn't whether the current tax environment & lobbying infrastructure significantly contribute to congressional corruption ... or that eliminating special tax provisions would go a long way toward a drastic reduction in lobbying and a corresponding reduction in corruption ... the biggest disagreement was how long it would take for congressional corruption to reappear in other ways (what is cause and what is effect?)

there was a footnote discussion that Ireland has come out against revision of the US tax code ... the justification being that a lot of companies have moved business to Ireland because of the issues with the US tax code, significantly improving the Irish economy.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Sustainable Web

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Sustainable Web
Date: 09 Sep 2008, 12:35 pm
Blog: Greater IBM Connection
Many of the early commercial time-sharing service bureaus in the 60s & early 70s were cp67 (virtual machine) and then later vm370 based. One of these provided the (IBM user group) SHARE organization with an online computer conference starting in 1976. This predated my getting blamed for online computer conferencing on the internal network in the late 70s and early 80s. It also predated bitnet and the bitnet listserv computer conferencing.

URLs to portions of a recent ibm mainframe (ibm-main) computer conference on the topic of early web servers ... i.e. the first one outside europe/cern was on the SLAC VM system. It also has references describing the early evolution of html and web technology.
https://www.garlic.com/~lynn/2008m.html#59
https://www.garlic.com/~lynn/2008m.html#60
https://www.garlic.com/~lynn/2008m.html#61
https://www.garlic.com/~lynn/2008m.html#62
https://www.garlic.com/~lynn/2008m.html#63
https://www.garlic.com/~lynn/2008m.html#64

note that the SLAC VM webserver provided an interface for querying their SPIRES database (i.e. it predates CGI and all the other forms of non-static web data)
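
a minimal sketch of the general idea (purely illustrative ... this is not the SLAC/SPIRES code; names, data and query format are hypothetical): a web server answering query URLs from an in-process lookup, rather than only returning static pages or handing the request off to a separately spawned CGI program

# illustrative sketch only ... not the SLAC/SPIRES code; data and query
# format are hypothetical. shows a web server generating query responses
# in-process (dynamic content) while still serving static pages otherwise.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# stand-in for a bibliographic database such as SPIRES (made-up records)
RECORDS = {"preprint-42": "example preprint record",
           "preprint-43": "another example record"}

class QueryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/find":                      # query interface
            key = parse_qs(url.query).get("id", [""])[0]
            body = RECORDS.get(key, "no match")
        else:                                        # everything else: static page
            body = "<html><body>static page</body></html>"
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), QueryHandler).serve_forever()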

for another reference ... this is the URL to a portion of an ibm mainframe (ibm-main) computer conference about early IBM related items ... a picture of the desk ornament commemorating the 1000th node on the internal network
https://www.garlic.com/~lynn/2008m.html#35

for the fun of it, pictures of the home setup between 77 & mid-80s (hardcopy ascii miniterm, 3101, and then ibm/pc)
https://www.garlic.com/~lynn/2008m.html#51

i'm still looking for old pictures of the 2741 terminal setup used at home between Mar70 and 77.

--
40+ yrs virtualization experience, online at home since Mar70

Fraud due to stupid failure to test for negative

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fraud due to stupid failure to test for negative
Newsgroups: alt.folklore.computers
Date: Tue, 09 Sep 2008 13:23:07 -0400
jmfbahciv <jmfbahciv@aol> writes:
I would be more interested in the VP levels. When DEC got sold, we knew exactly what would happen because of the VPs that stayed on. The ones, who had been interested in doing the business well, were let go.

in the early ... "early out" programs (paying employees to volunteer to leave) ... the employees with the highest likelihood of getting new positions were the most likely to sign up (the least likely to sign up were those pretty sure that nobody else would hire them). later, the plans were modified to require management approval (trying to halt the exodus of the most productive employees).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

z/OS Documentation - again

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z/OS Documentation - again
Newsgroups: bit.listserv.ibm-main
To: <ibm-main@bama.ua.edu>
Date: Wed, 10 Sep 2008 14:13:51 -0400
edjaffe@PHOENIXSOFTWARE.COM (Edward Jaffe) writes:
Several cogent letters have been written to IBM articulating the need for POO in HTML. IBM is aware of the problem and claims to be working on a solution. (Probably Eclipse help format.) Off the record, I heard that the BookManager build process "broke" when the POO got too big and nobody really knows how to fix the program.

early POOs had been moved to cms script. cms script started off being a flavor of the ctss runoff command ... with "dot" formatting commands ... but after gml was invented at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
in 1969 ... gml tag formatting support was added to cms script.
https://www.garlic.com/~lynn/submain.html#sgml

recent posts about gml evolving into sgml and also html:
https://www.garlic.com/~lynn/2008m.html#59 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#60 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#61 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#62 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#63 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#64 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#65 CHROME and WEB apps on Mainframe?

a major motivation for moving the POO to cms script was 1) having the document in machine-readable form and 2) having a command-line format option for producing the two different versions.

The complete version was the internal-only architecture "red-book" that had a whole lot of additional stuff that didn't appear in the POO subset. Having a single (machine-readable) document ... with the POO as a subset ... helped keep the related information in sync (i.e. the detailed architecture related information intermixed with the POO subset).
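
a minimal sketch of the single-source idea (purely illustrative ... the section markers and file name are hypothetical, not actual cms script conditional syntax): one machine-readable source where the architecture-only material is bracketed, and a command-line option selects either the full "red-book" rendering or the POO subset

# illustrative sketch only ... markers and file name are hypothetical, not
# cms script syntax. one source file; the "redbook" edition keeps the
# architecture-only sections, the "poo" edition drops them.
import sys

def render(lines, edition):
    out, internal_only = [], False
    for line in lines:
        if line.strip() == ".internal-only":     # begin architecture-only section
            internal_only = True
        elif line.strip() == ".end-internal":    # end architecture-only section
            internal_only = False
        elif edition == "redbook" or not internal_only:
            out.append(line)
    return out

if __name__ == "__main__":
    edition = sys.argv[1] if len(sys.argv) > 1 else "poo"   # "poo" or "redbook"
    with open("architecture.txt") as f:
        sys.stdout.write("".join(render(f.readlines(), edition)))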

bookmanager is one of the (other) descendants of the original cms script.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Wed, 10 Sep 2008 14:02:07 -0400
sidd <sidd@situ.com> writes:
here are the data

home price percent changes year on year since 1920
http://www.economist.com/displayStory.cfm?story_id=11465476

home prices (integral of the previous graph) since 1975

carter would be 1976-1980. yes there was a runup and a decline, i leave it to you to compare the magnitudes

didn't include a link for the second graph; here is one place:
http://mysite.verizon.net/vodkajim/housingbubble/


I would contend that the current home price correction is more akin to the stock market crash of '29 ... than related to past home-owner market activity (i.e. the statistics merge and obfuscate two different activities).

The scenario is that the legislative, regulatory, and policy changes in the 90s allowed the speculators to leverage unregulated triple-A rated toxic CDOs (for funds), descend on the home-owner market, and treat it like the unregulated 1920s stock market. The current bubble & correction looks more like speculation fervor than past home-owner activity.

An analogy is that cancerous cells have been impersonating healthy tissue. Legislative, regulatory and policy changes in the 90s effectively enabled the disease and prevented diagnosis before it reached a very advanced stage (delaying remedial treatment). With the cancerous cells so permeating the healthy tissue ... it is taking extreme measures to try and eradicate the disease ... and along the way there is extensive collateral damage to the healthy tissue. In fact, the cancerous cells appear to be trying to take advantage of the ambiguity and promote obfuscation.

the issue is that speculation was allowed to become a significant part of all activity in the home market, and there is now quite a bit of obfuscation and ambiguity regarding that speculation overlaying the traditional home-buyer activity (and the reasons for the correction).

long-winded, decade-old post mentioning some of the issues
https://www.garlic.com/~lynn/aepay3.htm#riskm

recent post suggesting that the price correction might reset to pre-speculation-bubble levels
https://www.garlic.com/~lynn/2008m.html#81 Fraud due to stupid failure to test for negative

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Wed, 10 Sep 2008 16:11:39 -0400
sidd <sidd@situ.com> writes:
while not discounting the role of toxic paper in the current debacle, there is also the huge growth in foreign holdings of treasury and agencies. i have seen it argued (unfortunately i cannot currently find the reference) that the real estate market was the only place large enough to absorb the enormous flux of investment dollars from japan, china and of late the russians and oil states. those governments were certainly complicit in the real estate boom. the situation is exacerbated by the dollar pegs maintained by many of the countries, which may no longer be in their best interests.

re:
https://www.garlic.com/~lynn/2008m.html#91 Blinkylights

one of the references was that GSEs, instead of sticking to straight mortgages ... also got into triple-A rated toxic CDO investments ... and that lots of sources then invested in GSEs.

There wasn't going to be a huge new explosion in standard (relatively safe) mortgages (that GSEs and others had been used to dealing directly with).

However, CDOs had been used two decades ago in the S&L crisis to obfuscate the underlying values. So looking at it from the other side, there was a huge untapped market in subprime and speculation ... that investors wouldn't otherwise be buying into ... because of the risk issues (part of the reason that it had been untapped ... with little investment/funding ... was that the risk was unacceptable to most investors).

so which is the chicken and which is the egg? would all the money have come in if the triple-A rated toxic CDOs hadn't been there to mask the risk ... or did all the money sitting around waiting for "safe" investments result in the creation of the triple-A rated toxic CDOs?

Packaging as triple-A rated toxic CDOs allowed the risk to be obfuscated and masked ... and opened up a way of channeling funds into that (risky) market ... from investors that would otherwise never have put their money into so much risk.

There were two sides ... unregulated mortgage originators all of a sudden had access to nearly unlimited funds ... and could write an unlimited number of mortgages w/o regard to risk and quality ... and still be able to unload them as triple-A rated toxic CDOs (getting more funds to write more such risky mortgages, again w/o regard to quality and/or risk). There wasn't directly a huge investment market for risky, subprime &/or speculation mortgages ... but all of that changed when they were repackaged and the risk obfuscated as triple-A rated toxic CDOs.

On the other side, a large number of investors found a huge new source of "safe" triple-A quality investments (either directly or indirectly ... like purchasing stock in institutions that were invested in triple-A rated toxic CDOs).

Part of this is the past references to Buffett having been the largest freddie shareholder in 2000/2001 and then getting out of GSEs because of their accounting methods ... and GSEs also making heavily leveraged investments in CDOs ... outside their nominal charter of directly dealing in (specific quality) mortgages.

the correction in the speculation & risky flavor of the market (which had been largely obfuscated with the use of triple-A rated toxic CDOs) ... is also having a disastrous downside effect on the straight home-owner flavor of the market.

long-winded, decade-old post about several of the issues
https://www.garlic.com/~lynn/aepay3.htm#riskm

a few recent posts mentioning accounting methods:
https://www.garlic.com/~lynn/2008m.html#75 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#76 When risks go south: FM&FM to be nationalized
https://www.garlic.com/~lynn/2008m.html#78 When risks go south: FM&FM to be nationalized
https://www.garlic.com/~lynn/2008m.html#80 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#83 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#86 WSJ finds someone to blame.... be skeptical, and tell the WSJ to grow up
https://www.garlic.com/~lynn/2008m.html#87 Fraud due to stupid failure to test for negative

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

What do you think are the top characteristics of a good/effective leader in an organization? Do you feel these characteristics are learned or innate to an individual?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: What do you think are the top characteristics of a good/effective leader in an organization? Do you feel these characteristics are learned or innate to an individual?
Date: September 10th, 2008
Blog: Organizational Development
I had sponsored John Boyd's briefings in the 80s ... one that evolved during the period was titled Organic Design For Command and Control ... which spent most of the time comparing & contrasting "Command & Control" ... with what he eventually labeled "Appreciation and Leadership".

Part of the scenario was that a lot of the people who had been young officers in WW2 were starting to come into executive positions and beginning to apply the training they had received as young officers. The issue was that at entry to WW2, the US needed to deploy a very large number of rapidly trained, inexperienced individuals.

In order to leverage the few available experienced individuals, an extremely rigid command&control infrastructure was created ... with the basic premise that the top (experienced) people had to provide very detailed direction to the large number of people who had no experience and didn't know what they were doing.

This philosophy that only the very top executives know what they are doing appears to permeate lots of organizations to this day. It has also been used to explain news items that the current ratio of executive compensation to worker compensation is running 400:1, after having been 20:1 for a long period (and 10:1 in most of the rest of the world).

posts mentioning Boyd:
https://www.garlic.com/~lynn/subboyd.html#boyd
various URLs from around the web mentioning Boyd &/or OODA-loop
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

How important, or not, is virtualization to cloud computing?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: How important, or not, is virtualization to cloud computing?
Date: September 10th, 2008
Blog: Enterprise Software
Cloud computing is about making ambiguous the relationship between physical boxes and the data processing that goes on inside them.

Virtualization has been leveraged to eliminate the binding between physical boxes and the associated data processing.

With similar objectives (related to the binding of data processing to physical boxes), virtualization can be leveraged for cloud computing.
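
a minimal sketch of the idea (purely illustrative, hypothetical names): once the data processing is virtualized, a workload is just a resource requirement that can be placed on ... or later moved to ... whatever physical box has spare capacity; nothing binds it to a particular machine

# illustrative sketch only (hypothetical names): first-fit placement of
# virtualized workloads onto whatever physical hosts have spare capacity.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: int

@dataclass
class Workload:
    name: str
    cpus: int

def place(workloads, hosts):
    """assign each workload to the first host with enough free capacity"""
    placement = {}
    for w in workloads:
        for h in hosts:
            if h.free_cpus >= w.cpus:
                h.free_cpus -= w.cpus
                placement[w.name] = h.name
                break
    return placement

hosts = [Host("box1", 8), Host("box2", 8)]
work = [Workload("web", 4), Workload("db", 6), Workload("batch", 2)]
print(place(work, hosts))   # {'web': 'box1', 'db': 'box2', 'batch': 'box1'}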

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Wed, 10 Sep 2008 23:05:12 -0400
Andrew Swallow <am.swallow@btinternet.com> writes:
I suspect that the major problem is that the funds were overvalued. A $100b mortgage bundle with a 10% failure rate was only worth $90b. Possibly $88b with costs.

re:
https://www.garlic.com/~lynn/2008m.html#91 Blinkylights
https://www.garlic.com/~lynn/2008m.html#92 Blinkylights

that presumably is part of how some have been buying them up at 22 cents on the dollar (as part of the toxic CDOs' downgrade from their original triple-A ratings and the half-trillion in write-downs) ... some recent refs:
https://www.garlic.com/~lynn/2008l.html#44 dollar coins
https://www.garlic.com/~lynn/2008l.html#67 dollar coins
https://www.garlic.com/~lynn/2008l.html#70 dollar coins

however, there are some number of institutions and operations that, for one reason or another ... only deal in things with triple-A ratings (having a triple-A rating on the toxic CDOs expanded the market they could be sold to)

there is also the part of the story about toxic CDOs having been used two decades ago in the S&L crisis as part of obfuscating the underlying value(s) ... i.e. part of getting a higher rating than the individual components deserved ... toxic CDO complexity and obfuscation being leveraged to make the whole appear to be much more than the sum of the parts ... possibly from having been born out of the junk bond culture.

long-winded, decade-old post discussing some of the issues ... including problems with transparency/visibility into CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Thu, 11 Sep 2008 00:03:18 -0400
re:
https://www.garlic.com/~lynn/2008m.html#91 Blinkylights
https://www.garlic.com/~lynn/2008m.html#92 Blinkylights
https://www.garlic.com/~lynn/2008m.html#95 Blinkylights

quick search engine use turns up quite a lot ... this is a fairly short article that is representative (from more than a year ago) ...

Subprime = Triple-A ratings? or 'How to Lie with Statistics' (gone 404 but lives on at the wayback machine)
https://web.archive.org/web/20071111031315/http://www.bloggingstocks.com/2007/07/25/subprime-triple-a-ratings-or-how-to-lie-with-statistics/

included in the above (mentions the realization that the toxic CDOs wouldn't have a market w/o the triple-A rating)
This distortion brings me back to the subject of worthy reading material that came up in this week's Sunday Funnies: Barron's "The Art of Successful Investing," where I remembered (in the comments) the classic 1954 book by Darrell Huff, How to Lie with Statistics, which is still in print. Some of you may not agree with me that it is an investment book, but I would put it forward as a must-read. It is cleverly written, simple to understand, short and to the point. Illustrations by Irving Geis help to inform the reader quickly.

... snip ...

and ....

The $18 trillion unpaid price of financial alchemy
http://www.bloggingstocks.com/tag/AAA+ratings/

somewhat related thread:
https://www.garlic.com/~lynn/2008e.html#59 independent appraisers
https://www.garlic.com/~lynn/2008e.html#66 independent appraisers
https://www.garlic.com/~lynn/2008e.html#69 independent appraisers
https://www.garlic.com/~lynn/2008e.html#70 independent appraisers
https://www.garlic.com/~lynn/2008e.html#76 independent appraisers
https://www.garlic.com/~lynn/2008e.html#77 independent appraisers
https://www.garlic.com/~lynn/2008e.html#78 independent appraisers
https://www.garlic.com/~lynn/2008f.html#0 independent appraisers
https://www.garlic.com/~lynn/2008f.html#1 independent appraisers
https://www.garlic.com/~lynn/2008f.html#4 independent appraisers
https://www.garlic.com/~lynn/2008f.html#5 independent appraisers
https://www.garlic.com/~lynn/2008f.html#10 independent appraisers
https://www.garlic.com/~lynn/2008f.html#11 independent appraisers
https://www.garlic.com/~lynn/2008f.html#13 independent appraisers
https://www.garlic.com/~lynn/2008f.html#14 independent appraisers
https://www.garlic.com/~lynn/2008f.html#15 independent appraisers
https://www.garlic.com/~lynn/2008f.html#16 independent appraisers
https://www.garlic.com/~lynn/2008f.html#17 independent appraisers
https://www.garlic.com/~lynn/2008f.html#20 independent appraisers
https://www.garlic.com/~lynn/2008f.html#25 independent appraisers
https://www.garlic.com/~lynn/2008f.html#26 independent appraisers
https://www.garlic.com/~lynn/2008f.html#32 independent appraisers
https://www.garlic.com/~lynn/2008f.html#43 independent appraisers
https://www.garlic.com/~lynn/2008f.html#46 independent appraisers
https://www.garlic.com/~lynn/2008f.html#51 independent appraisers
https://www.garlic.com/~lynn/2008f.html#52 independent appraisers
https://www.garlic.com/~lynn/2008f.html#53 independent appraisers
https://www.garlic.com/~lynn/2008f.html#57 independent appraisers
https://www.garlic.com/~lynn/2008g.html#12 independent appraisers
https://www.garlic.com/~lynn/2008g.html#13 independent appraisers
https://www.garlic.com/~lynn/2008g.html#16 independent appraisers
https://www.garlic.com/~lynn/2008g.html#20 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#64 independent appraisers
https://www.garlic.com/~lynn/2008g.html#66 independent appraisers
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers
https://www.garlic.com/~lynn/2008h.html#0 independent appraisers
https://www.garlic.com/~lynn/2008h.html#12 independent appraisers
https://www.garlic.com/~lynn/2008h.html#55 independent appraisers

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Thu, 11 Sep 2008 09:04:11 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
The $18 trillion unpaid price of financial alchemy
http://www.bloggingstocks.com/tag/AAA+ratings/


re:
https://www.garlic.com/~lynn/2008m.html#96 Blinkylights

and from "The $18 trillion unpaid price of financial alchemy"
The upshot is that investors in Asia and Europe -- eager for higher returns (estimated at 22 basis points above treasury yields) and comforted by the AAA rating -- recycled the cash generated from record energy prices and trade surpluses with the U.S. into these CDOs.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

what is the difference between web server and application server?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: what is the difference between web server and application server?
Date: September 11th, 2008
Blog: Software Development
a web server tends to be about browsers and html ... and can be for any sort of application. recent thread discussing the first webserver outside cern, on the SLAC vm system
https://www.garlic.com/~lynn/2008m.html#59 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008m.html#64 CHROME and WEB apps on Mainframe?

An application server would be an application-specific server ... which may or may not work with a browser (i.e. an application server doesn't have to be done with a webserver and a webserver doesn't have to be application specific).

We had been brought in to consult with a small client/server startup that wanted to do payment transactions on their server. They had this technology they had invented called SSL that they wanted to use; the result is now frequently called "electronic commerce".

Part of the implementation was something called a "payment gateway" ... that we've periodically referred to as the original/first SOA. It was an application server connected to the web that webservers communicated with as part of performing payment transactions ... however, it didn't have a web/browser interface. misc. past posts mentioning the payment gateway effort:
https://www.garlic.com/~lynn/subnetwork.html#gateway
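
a minimal sketch of that split (purely illustrative ... names, ports and message format are hypothetical, not the actual payment gateway implementation): the web server talks html to browsers, while the application-specific server speaks its own request/response protocol and has no browser interface at all

# illustrative sketch only ... hypothetical names, ports and message format.
import json, socket, threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def payment_gateway(port=9000):
    # application-specific server: json request in, json result out, no html
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("localhost", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        req = json.loads(conn.recv(4096).decode())
        conn.sendall(json.dumps({"order": req["order"], "status": "approved"}).encode())
        conn.close()

class Shop(BaseHTTPRequestHandler):
    # web server: serves html to the browser, forwards the payment to the gateway
    def do_GET(self):
        c = socket.create_connection(("localhost", 9000))
        c.sendall(json.dumps({"order": self.path.strip("/") or "123", "amount": 10}).encode())
        result = c.recv(4096).decode()
        c.close()
        body = ("<html><body>payment result: " + result + "</body></html>").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=payment_gateway, daemon=True).start()
    HTTPServer(("localhost", 8080), Shop).serve_forever()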

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Blinkylights

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinkylights
Newsgroups: alt.folklore.computers
Date: Thu, 11 Sep 2008 13:39:05 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Subprime = Triple-A ratings? or 'How to Lie with Statistics' (gone 404 but lives on at the wayback machine)
https://web.archive.org/web/20071111031315/http://www.bloggingstocks.com/2007/07/25/subprime-triple-a-ratings-or-how-to-lie-with-statistics/


re:
https://www.garlic.com/~lynn/2008m.html#96 Blinkylights

the issues with toxic CDOs were at least well understood after their use two decades ago during the S&L crisis to obfuscate the underlying values.

aspects of the toxic CDO problem were also raised in this long-winded, decade-old post
https://www.garlic.com/~lynn/aepay3.htm#riskm

that the current credit crisis (in large part fueled with funds from triple-A rated toxic CDOs ... with the problems widely known and understood) ... was allowed to happen ... would strongly imply backing by powerful interests ... including for fueling the home-owner market speculation (basically allowing the home-owner market to be treated like the unregulated 1920s stock market).
https://www.garlic.com/~lynn/2008m.html#91 Blinkylights

this has been the motivation for past use of the analogy with the emperor's new clothes parable
https://www.garlic.com/~lynn/2008j.html#20 dollar coins
https://www.garlic.com/~lynn/2008j.html#40 dollar coins
https://www.garlic.com/~lynn/2008j.html#60 dollar coins
https://www.garlic.com/~lynn/2008j.html#69 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#10 Why do Banks lend poorly in the sub-prime market? Because they are not in Banking!
https://www.garlic.com/~lynn/2008k.html#16 dollar coins
https://www.garlic.com/~lynn/2008k.html#27 dollar coins
https://www.garlic.com/~lynn/2008l.html#42 dollar coins
https://www.garlic.com/~lynn/2008m.html#4 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#12 Fraud due to stupid failure to test for negative

--
40+yrs virtualization experience (since Jan68), online at home since Mar70



