List of Archived Posts

2012 Newsgroup Postings (10/24 - 12/02)

history of Programming language and CPU in relation to each other
OT: Tax breaks to Oracle debated
OT: Tax breaks to Oracle debated
OT: Tax breaks to Oracle debated
Unintended consequence of RISC simplicity?
node names
Mainframes are still the best platform for high volume transaction processing
Beyond the 10,000 Hour Rule
Initial ideas (orientation) constrain creativity
OT: Tax breaks to Oracle debated
OT: Tax breaks to Oracle debated
Mainframes are still the best platform for high volume transaction processing
OT: Tax breaks to Oracle debated
Should you support or abandon the 3270 as a User Interface?
OT: Tax breaks to Oracle debated
OT: Tax breaks to Oracle debated
OT: Tax breaks to Oracle debated
Initial ideas (orientation) constrain creativity
other days around me
Assembler vs. COBOL--processing time, space needed
Assembler vs. COBOL--processing time, space needed
Assembler vs. COBOL--processing time, space needed
Assembler vs. COBOL--processing time, space needed
Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design?
Assembler vs. COBOL--processing time, space needed
Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
Why bankers rule the world
Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
IBM mainframe evolves to serve the digital world
Mainframes are still the best platform for high volume transaction processing
Regarding Time Sharing
Regarding Time Sharing
Does the IBM System z Mainframe rely on Obscurity or is it Security by Design?
360/20, was 1132 printer history
Regarding Time Sharing
Regarding Time Sharing
Regarding Time Sharing
Regarding Time Sharing
Assembler vs. COBOL--processing time, space needed
PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
360/20, was 1132 printer history
Regarding Time Sharing
PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
Random thoughts: Low power, High performance
PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
What will contactless payment do to security?
Lotus: Farewell to a Once-Great Tech Brand
PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
Thoughts About Mainframe Developers and Why I Was Oh So Wrong
360/20, was 1132 printer history
Reduced Symbol Set Computing
Regarding Time Sharing
ISO documentation of IBM 3375, 3380 and 3390 track format
ISO documentation of IBM 3375, 3380 and 3390 track format
ISO documentation of IBM 3375, 3380 and 3390 track format
ISO documentation of IBM 3375, 3380 and 3390 track format
ISO documentation of IBM 3375, 3380 and 3390 track format
Is it possible to hack mainframe system??
Random thoughts: Low power, High performance
ISO documentation of IBM 3375, 3380 and 3390 track format
ISO documentation of IBM 3375, 3380 and 3390 track format
kludge, was Re: ISO documentation of IBM 3375, 3380
PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
Can Open Source Ratings Break the Ratings Agency Oligopoly?
bubble memory
Is orientation always because what has been observed? What are your 'direct' experiences?
AMC proposes 1980s computer TV series "Halt & Catch Fire"
These Two Charts Show How The Priorities Of US Companies Have Gotten Screwed Up

history of Programming language and CPU in relation to each other

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of Programming language and CPU in relation to each other
Newsgroups: alt.folklore.computers
Date: Wed, 24 Oct 2012 08:02:50 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
what x9.59 financial industry standard did ... some references
https://www.garlic.com/~lynn/x959.html#x959

it was not anything regarding stopping leakage ... it just tweaked the paradigm so that if credit card numbers and/or information from previous financial transactions leaked ... the information couldn't be used by crooks to perform fraudulent transactions (the information could be plastered all over public locations and it would still be useless to the crooks).


re:
https://www.garlic.com/~lynn/2012n.html#67 history of Programming language and CPU in relation to each other
https://www.garlic.com/~lynn/2012n.html#75 history of Programming language and CPU in relation to each other

aka (from yesterday) ... this is eliminated as a threat. the current criminal objective is to use stolen information for performing fraudulent transactions ... but x9.59 slightly tweaked the paradigm ... so that the information crooks can steal in this manner (skimming, but also breaches and other forms of leakage) is not sufficient for performing fraudulent transactions:

Hackers steal customer data from Barnes & Noble keypads; Point-of-sale terminals at 63 bookstores are found to have been modified to hijack customers' credit card and PIN information.
http://news.cnet.com/8301-1009_3-57538742-83/hackers-steal-customer-data-from-barnes-noble-keypads/

aka as mentioned, enormously reducing the attack surface:
https://www.garlic.com/~lynn/2012n.html#79 history of Programming language and CPU in relation to each other

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT:  Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Wed, 24 Oct 2012 13:58:15 -0400
Dan Espen <despen@verizon.net> writes:
Government giveaways are much more massive than most people realize. The overwhelming majority goes to businesses, the most notable the MIC, or MICC if you prefer.

Nothing puts people to work like building Nuclear Carriers, submarines, and missile systems. Don't have enough buyers, simple, give money to foreign militaries. Sort out the deaths and dictatorships later.

That's why my favorite proposed government program is flower gardens on the interstates. It puts massive numbers of people to work and everyone gets the benefit. Think of it, thousands of miles of roses, dahlias, daffodils, portulaca, daisies. The views will be spectacular, almost anyone can do the work and it's good for all of us. Even scientists can get into the act breeding new flowers.

It's even good crash protection. If you run off the road into plowed ground, you're going to slow down pretty quick.


"The Domestic Roots of Perpetual War"
http://chuckspinney.blogspot.com/p/domestic-roots-of-perpetual-war.html

a version of the above also appears as Chapter 1 in Pentagon Labyrinth (on DOD procurement and financials):
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html

Annual Boyd conference at Quantico Marine Corps University week before last ... URLs from around the web & past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

note that CBO reports that tax revenues were cut by $6T last decade and spending increased by $6T, for a $12T budget gap (compared to the baseline, which had all federal debt retired by 2010) ... much of it happening after congress let the fiscal responsibility act expire in 2002 (it required that spending & revenue match). Much of the reduction in revenues and increase in spending has continued into this decade.

$2T of the $6T spending increase last decade went to DOD, $1T of which was appropriations for the wars ... analysis doesn't show where the other extra trillion has gone. Recent dire projections about looming DOD spending cuts actually involve cutting DOD back to 2007 spending levels.

I've periodically pontificated that Boyd was largely behind F20 tigershark ... as meeting many of his original design goals for the F16 ... before it too started to bloat. They figured that US military wouldn't buy ... but had designs on exports. It turned out that the F16 lobby moved in and got congress to appropriate foreign aid to all potential F20 buyers that was earmarked for F16 purchases (i.e. the foreign countries could only use the aid for buying F16s). While the F20 was significantly better for their purposes ... they were then faced with effectively getting F16s for free or having to use their own money for F20s.

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT:  Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Wed, 24 Oct 2012 18:48:55 -0400
Dan Espen <despen@verizon.net> writes:
Insane. How does it make sense for the US government to give money to foreign governments to buy weapons? I don't care if they are US weapons.

There are much better things the US government can subsidize, and it won't lead to some foreign government terrorizing their citizens and neighbors.


re:
https://www.garlic.com/~lynn/2012o.html#1 OT: Tax Breaks to Oracle debated

and part of it is subsidy for the MICC that doesn't show up as part of the DOD budget

also supporting corrupt governments as part of plundering the country ... online:
https://en.wikipedia.org/wiki/War_Is_a_Racket

more recent detail
https://www.amazon.com/Confessions-Economic-Hit-Man-ebook/dp/B001AFF266

recent posts mentioning "War Is a Racket" &/or "Economic Hit Man":
https://www.garlic.com/~lynn/2012.html#25 You may ask yourself, well, how did I get here?
https://www.garlic.com/~lynn/2012d.html#57 Study Confirms The Government Produces The Buggiest Software
https://www.garlic.com/~lynn/2012e.html#70 Disruptive Thinkers: Defining the Problem
https://www.garlic.com/~lynn/2012f.html#70 The Army and Special Forces: The Fantasy Continues
https://www.garlic.com/~lynn/2012j.html#81 GBP13tn: hoard hidden from taxman by global elite
https://www.garlic.com/~lynn/2012k.html#45 If all of the American earned dollars hidden in off shore accounts were uncovered and taxed do you think we would be able to close the deficit gap?
https://www.garlic.com/~lynn/2012l.html#58 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2012l.html#62 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2012l.html#97 What a Caveman Can Teach You About Strategy
https://www.garlic.com/~lynn/2012m.html#49 Cultural attitudes towards failure
https://www.garlic.com/~lynn/2012n.html#29 Jedi Knights
https://www.garlic.com/~lynn/2012n.html#60 The IBM mainframe has been the backbone of most of the world's largest IT organizations for more than 48 years
https://www.garlic.com/~lynn/2012n.html#83 Protected: R.I.P. Containment

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT:  Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Wed, 24 Oct 2012 21:10:26 -0400
Dan Espen <despen@verizon.net> writes:
But there are loads of people that don't have the strength, stamina, or smarts to build roads. A big chunk of them can operate a spade, poke a hole in the ground, stick a plant in it. Come back and weed and water.

re:
https://www.garlic.com/~lynn/2012n.html#84 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#1 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#2 OT: Tax breaks to Oracle debated

Volcker, in discussion with a civil engineering professor about the significant decline (over decades) in infrastructure projects (as institutions skimmed funds for other purposes & civil engineering jobs disappeared), resulting in universities cutting back civil engineering programs; "Confidence Men", pg290:
Well, I said, 'The trouble with the United States recently is we spent several decades not producing many civil engineers and producing a huge number of financial engineers. And the result is s**tty bridges and a s**tty financial system!'

... snip ...

the followup was that as stimulus funds pumped money into infrastructure projects (in many cases after decades of neglect) ... contracts were going to foreign companies that still had civil engineers.

past posts mentioning Volcker comment
https://www.garlic.com/~lynn/2011p.html#91 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2012.html#44 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012b.html#43 Where are all the old tech workers?
https://www.garlic.com/~lynn/2012c.html#63 The Economist's Take on Financial Innovation
https://www.garlic.com/~lynn/2012f.html#67 Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#30 24/7/365 appropriateness was Re: IBMLink outages in 2012
https://www.garlic.com/~lynn/2012g.html#48 Owl: China Swamps US Across the Board -- Made in China Computer Chips Have Back Doors, 45 Other "Ways & Means" Sucking Blood from US
https://www.garlic.com/~lynn/2012h.html#77 Interesting News Article

--
virtualization experience starting Jan1968, online at home since Mar1970

Unintended consequence of RISC simplicity?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unintended consequence of RISC simplicity?
Newsgroups: comp.arch
Date: Fri, 26 Oct 2012 10:25:39 -0400
nmm1 writes:
Please do not confound over-simplification and cleanliness. I doubt that you could produce a concrete example where even excessive cleanliness, as such, has been a major source of mistakes.

I quite agree that the apparent excessive cleanliness of the early RISCs caused a lot of mistakes, but that was not due to that aspect. It was almost entirely due to passing over all of the hard problems to software, without providing the software with the primitives needed to handle them. That's over-simplification, and is an example of UNCLEAN design.


at an internal advanced technology conference in the mid-70s ... we were presenting 16-way 370 SMP and the 801 group was presenting risc & cp.r.

the 801 group dissed the 16-way 370 SMP presentation saying that they had looked at the off-the-shelf operating system source and it didn't have support for 16-way 370 SMP. I mentioned something about writing the software.

The 801 group then presented risc ... it had significant simplifications, no hardware privilege protection, inverted virtual address tables, only sixteen virtual address segment registers. I pointed out the lack of any software (the amount of software i had to write to add 16-way SMP support was radically less than the total amount of software they had to write from scratch) and the limited virtual sharing available with only 16 segment registers.

they replied that risc/cp.r represented significant hardware/software trade-offs as part of achieving hardware simplification. cp.r would be a closed, proprietary system that compensated for the lack of privileged-mode hardware protection by only loading correctly compiled pl.8 "correct" programs for execution. also, inline code could switch segment register values as easily as existing applications switch the contents of addressing registers (no kernel calls required for permission mode checking ... since all code would always be "correct").

the 801 romp chip & cp.r were targeted as the follow-on for the opd displaywriter; when that was canceled ... the group decided to retarget the machine to the unix workstation market ... which also required moving from cp.r & pl.8 to unix & c ... as well as adding hardware protection.

possibly for my earlier sin of criticizing the limit of 16 segment registers, I got tasked with doing the design for packing multiple (smaller) shared libraries into a single 256mbyte 801 segment.

misc. past posts mentioning 801, iliad, risc, romp, rios, somerset, aim, fort knox, power, america, etc
https://www.garlic.com/~lynn/subtopic.html#801

as an aside, the 16-way SMP was gaining lots of acceptance until somebody happened to inform the head of POK (high-end mainframe land) that it might be decades before the POK favorite son operating system had 16-way SMP support ... at which time, some number of people were invited to never visit POK again. misc. past posts mentioning SMP
https://www.garlic.com/~lynn/subtopic.html#smp

the company had recently come off the "future system" failure ... a period when 370 efforts were canceled (as possibly in competition with "future system") ... and was in mad rush to get products back into the 370 product pipelines (I've also periodically commented that I think a major factor in John doing 801/risc was to go to the opposite extreme of what was happening in FS). misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

part of the mad rush was taking the 370/168-3 logic and remapping it to 20% faster chips ... for the 3033 ... initially for a machine 20% faster than the 168-3. The chips also had ten times the circuits of those used for the 168-3 ... initially going unused. During the 3033 development cycle there was some redo of the logic to take advantage of the higher circuit density and do more on chip ... eventually getting the 3033 to 50% faster than the 168-3. In any case, during the 16-way SMP activity, we co-opted some of the 3033 processor engineers to work on the 16-way effort in their spare time (lots more interesting than what they were doing). Besides some of us getting invited to never visit POK again, the 3033 processor engineers were directed to totally focus on the 3033 and not let anybody distract them.

--
virtualization experience starting Jan1968, online at home since Mar1970

node names

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: node names
Newsgroups: alt.folklore.computers
Date: Fri, 26 Oct 2012 12:05:59 -0400
repost from "node name" thread in another fora:

an old post with list of machines at various institutions from summer of 1981
https://www.garlic.com/~lynn/2001l.html#61

I had been blamed for online computer conferencing on the internal network in the late 70s and early 80s. Then, as Jim Gray was leaving San Jose Research for Tandem in Oct 1980, he wrote "MIP ENVY" (commenting about corporate resources for internal users compared to other institutions) ... he was also palming off some number of things on me.

After he left, I would periodically go by and visit Jim at Tandem, write up a trip report, and distribute it. Some of it would resonate with people and be redistributed, and an increasing number of people would send me comments ... which I would in turn add to a growing distribution list. This collection of comments came to be known as the "Tandem Memos" ... which resulted in some amount of turmoil inside the corporation and various reactions (folklore is that when the executive committee was told about online computer conferencing and the internal network, 5 of 6 wanted to fire me).

In any case, one of the results was that in the summer of 1981, there were sponsored visits to a number of institutions to gather information regarding the computing resources at those institutions. The post from 2001 has a small bit from the resulting report about the other institutions.

... my 20Sept1980 MIP ENVY version
https://www.garlic.com/~lynn/2007d.html#email800920
in this post
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray is Missing

and both 20Sept1980 and 24Sept1980 version can be found here in Publications section:
http://research.microsoft.com/en-us/um/people/gray/

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframes are still the best platform for high volume transaction processing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 28 Oct 2012
Subject: Mainframes are still the best platform for high volume transaction processing.
Blog: Mainframe Experts
re:
http://lnkd.in/RsuUyw
and
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing

i86 technology for the past few generations has moved to risc cores with a hardware layer that translates i86 instructions into risc micro-ops. this has appeared to negate the performance advantage that risc processors had over i86 for decades. even z196 started to get more risc'y; the claim is that much of the z196 processor improvement over z10 is the introduction of out-of-order execution ... something that risc has had for decades. zec12 is described as having further improvements in out-of-order execution ... further improving performance

• 2005 Z9 17.8BIPS 54processor 330MIPS/processor
• 2008 Z10 30BIPS 64processor 460MIPS/processor
• 2010 Z196 50BIPS 80processor 625MIPS/processor
• 2012 zec12 75BIPS 101processor 743MIPS/processor

basically the benchmark is the number of iterations compared to iterations on a 370/158 & vax780 ... both considered to be 1MIP machines
https://en.wikipedia.org/wiki/Instructions_per_second
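the MIPS/processor column in the list above is just the aggregate BIPS divided across the processor count; a minimal sketch of the arithmetic (machine figures taken from the list above; note the published z10 figure of 460 rounds a bit differently from the straight division):

```python
# Per-processor MIPS from aggregate BIPS: (BIPS * 1000) / processors.
# Figures from the list above; the published z10 number (460) is
# slightly below the straight division (~469), presumably rounding
# in the source.
machines = [
    ("z9",    17.8,  54),   # 2005
    ("z10",   30.0,  64),   # 2008
    ("z196",  50.0,  80),   # 2010
    ("zec12", 75.0, 101),   # 2012
]

for name, bips, procs in machines:
    print(f"{name}: {bips * 1000 / procs:.0f} MIPS/processor")
```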

• 2003 INTEL 9.7BIPS
• 2005 AMD 14.5BIPS
• 2006 AMD 20BIPS
• 2006 INTEL 49BIPS
• 2008 INTEL 82BIPS
• 2009 AMD 78BIPS
• 2010 INTEL 147BIPS
• 2011 AMD 109BIPS
• 2011 INTEL 178BIPS

part of the innovation and drive for increased performance is attributed to two different vendors competing in the same chip market.

e5-2690 two socket, 8cores/chip
http://www.istorya.net/forums/computer-hardware-21/485176-intel-xeon-e5-2690-and-e5-2660-8-core-sandy-bridge-ep-review.html

Dhrystone
E5-2690 @2.9GHZ 527.55BIPS
E5-2660 @2.2GHZ 428.15BIPS
X5690 @3.45GHZ 307.49BIPS
i7-3690 @4.78GHZ 288BIPS
AMD 6274 @2.4GHZ 272.73BIPS

Whetstone
E5.2690 @2.9GHZ 315GFLOPS
E5-2660 @2.2GHZ 263.7GFLOPS
X5690 @3.4GHZ 227GFLOPS
i7-3690 @4.78GHZ 176GFLOPS
AMD 6274 @2.4GHZ 168.11GFLOPS

the e5-4600 is four socket, 8cores/chip, for 32 cores total ... there are references on various sites about vendors with e5-4600 blades in the same form factor as e5-2600, and some references that e5-4600 should come in around 1000BIPS ... compared to 527BIPS for e5-2600.

There are numerous articles about big cloud vendors having standardized on e5-2600 blades, and numerous Amazon articles about customers doing on-demand "renting" of large numbers of e5-2600 blades for supercomputer calculations lasting a couple of hrs ... setup & scheduling doesn't even require talking to an Amazon employee. The aggregate of some of these "on-demand" supercomputers exceeds the total aggregate processing power of all the mainframes in the world today.

A recent article about Google mega-datacenters says their current standard server (e5-2600 blade) has twenty times the processing power of their previous standard blade. That would roughly correspond to the current 527BIPS e5-2600 versus a 2006-era 20-30BIPS i86 blade.
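as a rough cross-check (a sketch using only the BIPS figures quoted above), the twenty-times figure brackets nicely against the 527BIPS e5-2600 and a 20-30BIPS 2006-era blade:

```python
# Ratio of a current e5-2600 blade to 2006-era i86 blades,
# using the BIPS figures quoted above.
e5_2600_bips = 527
era_2006_low_bips = 20    # slower 2006-era blade
era_2006_high_bips = 30   # faster 2006-era blade

print(f"vs 30BIPS blade: {e5_2600_bips / era_2006_high_bips:.1f}x")
print(f"vs 20BIPS blade: {e5_2600_bips / era_2006_low_bips:.1f}x")
# the claimed "twenty times" falls inside the resulting ~18x-26x range
```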

more discussion of the big cloud mega-datacenters in this x-over mainframe discussion in (linkedin) Enterprise Systems group
http://lnkd.in/NBbbzr
and
https://www.garlic.com/~lynn/2012n.html#69 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?

part of the reference is that the i86 server chip lines and the i86 consumer chip lines are different ... possibly resulting in confusion about the server chip capabilities. also, the large cloud mega-datacenters are starting to drive some amount of the i86 server characteristics ... having emerged as major customers of i86 server chips. They have also taken to doing their own custom blade assemblies ... claiming 1/3rd the cost of equivalent blades from brand name vendors (like ibm, hp, dell, etc). The large cloud mega-datacenters have also been driving a lot of the innovation in power & cooling as well as total cost of ownership (as the drop in processing costs accelerates, the relative cost of the rest of mega-datacenter operation becomes a bigger consideration).

and references for I/O ... originally from post in mainframe ibm-main mailing list (originated on bitnet in the 80s) ...
https://www.garlic.com/~lynn/2012m.html#3 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
and also here:
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM

the addition of zhpf & tcw gives roughly a factor of 3 times throughput improvement over base FICON (the addition of the original FICON layer enormously reduced the thruput of the underlying fibre-channel ... dating back to 1988)
http://www-03.ibm.com/systems/z/hardware/connectivity/ficon_performance.html

peak z196 at 2M IOPS with 104 FICON channels, 14 storage subsystems and 14 system assist processors
ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03169usen/ZSW03169USEN.PDF

however, there is mention that the 14 system assist processor peak of 2.2M SSCH/sec is at 100% busy ... but recommendations are to keep SAPs at 70% or less (1.5M SSCH/sec).
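the 1.5M number is just the 2.2M peak derated to the recommended utilization ceiling; a minimal sketch of the arithmetic:

```python
# 14 SAPs peak at 2.2M SSCH/sec at 100% busy; the recommendation
# is to keep SAPs at 70% or less, giving the effective ceiling.
peak_ssch_per_sec = 2.2e6
recommended_utilization = 0.70

effective = peak_ssch_per_sec * recommended_utilization
print(f"{effective / 1e6:.2f}M SSCH/sec")   # ~1.5M SSCH/sec
```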

reference to a (single) fibre-channel for e5-2600 capable of over a million IOPS (compared to z196 using 104 FICON channels to get to 2M IOPS)
http://www.emulex.com/artifacts/0c1f55d0-aec6-4c37-bc42-7765d5d7a70e/elx_wp_all_hba_romley.pdf

--
virtualization experience starting Jan1968, online at home since Mar1970

Beyond the 10,000 Hour Rule

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 28 Oct 2012
Subject: Beyond the 10,000 Hour Rule
Blog: Boyd Strategy
re:
http://lnkd.in/fB_SZU
and
https://www.garlic.com/~lynn/2012n.html#78 Beyond the 10,000 Hour Rule
https://www.garlic.com/~lynn/2012n.html#82 Beyond the 10,000 Hour Rule

There is lots of stuff about predicting what other parties will do and whether or not the predictions are correct ... most of the discussion is about predicting what an enemy will do and whether or not the predictions come true. Betrayal could be viewed as predicting what a friend or ally would do (as opposed to an enemy).

So the next level starts with what is in the self-interest of the parties involved. This came up in the congressional hearings into the pivotal role that the rating agencies played in the economic mess of the last decade. The testimony was that in the 70s, the rating agency business process became "mis-aligned" ... when the rating agencies changed from the buyers paying for the ratings to the sellers paying for the ratings (i.e. the ratings were for the benefit of the buyers, but they came increasingly under the control of the sellers).

The testimony was that during the last decade, the CDO sellers found they could pay the rating agencies for triple-A ratings ... even when both the CDO sellers and the rating agencies knew the CDOs weren't worth triple-A. This also opened up the market for toxic CDOs to the large institutional investors that were restricted to dealing only in "safe" (triple-A) investments. This largely contributed to an estimated $27 trillion in triple-A rated toxic CDOs done during the economic mess
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

In the S&L crisis, securitized mortgages (CDOs) had been used to obfuscate fraudulent mortgages ... but w/o triple-A rating, their market was severely limited. In the late 90s we were asked to look at improving the trust & integrity of the supporting documents in CDOs. However, by paying for triple-A ratings, the loan originators could start doing no-documentation loans (triple-A rating trumping documentation). With no documentation there was no longer an issue of the documents' trust & integrity (this has come up recently with the robo-signing scandal and the generation of fraudulent supporting documents).

As an aside, during the rating agency congressional hearings ... there were comments in the press that the rating agencies would likely be able to avoid federal prosecution with the threat of down-grading the federal gov. credit rating. As for the other parties on wallstreet, they were making possibly an aggregate 15% in fees and commissions on the triple-A rated toxic CDO transactions flowing thru wallstreet (as the mortgage market moved from traditional banking to wallstreet CDO transactions). This change-over represented possibly $4T in additional wallstreet income from the $27T in triple-A toxic CDO transactions during the period. This is separate from the CDS gambling: helping package triple-A toxic CDOs (predicted to fail), selling them to their customers, and then making CDS bets that they would fail.

recent posts mentioning $27T in toxic CDO transactions:
https://www.garlic.com/~lynn/2012.html#32 Wall Street Bonuses May Reach Lowest Level in 3 Years
https://www.garlic.com/~lynn/2012b.html#19 "Buffett Tax" and truth in numbers
https://www.garlic.com/~lynn/2012b.html#65 Why Wall Street Should Stop Whining
https://www.garlic.com/~lynn/2012b.html#95 Bank of America Fined $1 Billion for Mortgage Fraud
https://www.garlic.com/~lynn/2012c.html#30 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#45 Fannie, Freddie Charge Taxpayers For Legal Bills
https://www.garlic.com/~lynn/2012c.html#46 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#54 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#32 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#42 China's J-20 Stealth Fighter Is Already Doing A Whole Lot More Than Anyone Expected
https://www.garlic.com/~lynn/2012e.html#23 Are mothers naturally better at OODA because they always have the Win in mind?
https://www.garlic.com/~lynn/2012e.html#40 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#58 Word Length
https://www.garlic.com/~lynn/2012f.html#31 Rome speaks to us. Their example can inspire us to avoid their fate
https://www.garlic.com/~lynn/2012f.html#63 One maths formula and the financial crash
https://www.garlic.com/~lynn/2012f.html#66 Predator GE: We Bring Bad Things to Life
https://www.garlic.com/~lynn/2012f.html#69 Freefall: America, Free Markets, and the Sinking of the World Economy
https://www.garlic.com/~lynn/2012f.html#75 Fed Report: Mortgage Mess NOT an Inside Job
https://www.garlic.com/~lynn/2012f.html#80 The Failure of Central Planning
https://www.garlic.com/~lynn/2012f.html#87 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012g.html#6 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#20 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012g.html#28 REPEAL OF GLASS-STEAGALL DID NOT CAUSE THE FINANCIAL CRISIS - WHAT DO YOU THINK?
https://www.garlic.com/~lynn/2012g.html#71 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#76 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#26 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#32 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#75 Interesting News Article
https://www.garlic.com/~lynn/2012i.html#14 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012j.html#28 Why Asian companies struggle to manage global workers
https://www.garlic.com/~lynn/2012j.html#65 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012m.html#56 General Mills computer
https://www.garlic.com/~lynn/2012n.html#12 Why Auditors Fail To Detect Frauds?

--
virtualization experience starting Jan1968, online at home since Mar1970

Initial ideas (orientation) constrain creativity

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 28 Oct 2012
Subject: Initial ideas (orientation) constrain creativity
Blog: Boyd Strategy
re:
http://lnkd.in/82KbHa

In (facebook "closed") Boyd&Beyond there is some implied reference that language might constrain innovation (no words for something that is original and unique for the first time). As I mentioned at B&B, Boyd somewhat addressed it in briefings when talking about constantly observing from every possible facet/perspective (which might be construed as being able to reset biases &/or preconceived ideas).

facebook B&B has reference to Boyd writing in margin of book:

"Putting a thought into words represents the introduction of constraints and barriers."

in facebook B&B discussion, I referenced this
http://www.tempobook.com/2012/04/26/thinking-in-a-foreign-language/

where I reference "proficiency" in computer languages ... the implication was that some things that were rather difficult when thinking in english and then translating to computer language ... became almost trivial if thinking directly in the computer language.

possible hypothesis is that with sufficient focus and study ... anybody can become natural-language proficient in a foreign language ... and different languages may make various concepts easier (or harder) to deal with. A corollary is then that with sufficient focus and study ... a person may become natural-language proficient in math symbols & math operations (or computer languages).

Speed in OODA-loop is getting more & more proficient doing the same thing over & over. However, constantly observing from every possible facet & viewpoint may bring new concepts (rather than simply repeating existing concepts faster).

I have a different periodic technology rant about tools shaping the solutions & answers ... a particular case is RDBMS & SQL. I was in San Jose Research participating in the original RDBMS/SQL implementation .... misc. past posts on the original RDBMS/SQL implementation
https://www.garlic.com/~lynn/submain.html#systemr

part of the original RDBMS/SQL was making certain simplifications that helped make ATM (cash machine) kind of transactions much more efficient (in part because those kinds of transactions might be considered early adopters) ... but it required first forcing information for RDBMS into accounting record structure ... where all pieces of information have the same, known, uniform structure.

About the same time, I also got sucked into helping implement another kind of relational structure ... that was much more generalized but much less efficient in doing financial kinds of transactions. It also is much better at dealing with partial, incomplete, and anomalous data. RDBMS/SQL tends to produce unanticipated/unpredictable results when dealing with "NULLS" and "3-value logic" ... such that there are frequently recommendations to avoid situations involving partial, incomplete, and/or anomalous data.

However, having done lots of work on many kinds of DBMS systems, I can see where RDBMS/SQL does extremely well and where it is totally inappropriate (but for people that have only dealt with RDBMS/SQL ... they sometimes assume that it is the only way for dealing with information ... and that information has to be prestructured to meet the RDBMS/SQL model).

related to my rant about RDBMS/SQL and not handling partial, incomplete and/or anomalous information

Anticipating Surprise: Analysis for Strategic Warning
https://www.amazon.com/Anticipating-Surprise-Analysis-Strategic-ebook/dp/B008H9Q5IW

loc129-32
Dictionary definitions of the word "indicate" refer to something less than certainty; an "indication" could be a sign, a symptom, a suggestion, or grounds for inferring or a basis for believing. Thus, the choice of this term in warning intelligence is a realistic recognition that warning is likely to be less than certain and to be based on information which is incomplete, of uncertain evaluation or difficult to interpret.

... snip ...

different starting points: 8 Psychological Insights That Could Prevent The Next Crisis
http://www.businessinsider.com/behavioral-science-and-the-financial-crisis-2012-10

recent post/reference over in facebook: Language is a Map
http://www.linkedin.com/today/post/article/20121029141916-16553-language-is-a-map

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT:  Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Sun, 28 Oct 2012 15:58:37 -0400
Dan Espen <despen@verizon.net> writes:
Meanwhile, the Iranians got to influence the outcome of an American election. It's no coincidence that the hostages came home Jan 20, 1981. Then Carter's successor sold arms to Iran. Also good for business.

re:
https://www.garlic.com/~lynn/2012n.html#84 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#1 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#2 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#3 OT: Tax breaks to Oracle debated

I think US was dealing with both sides.

this is an account of how Saddam wasn't worried about US intervention over the invasion of Kuwait because of the special relationship he had with the US (earlier the executive branch was publicly saying that Saddam wasn't going to invade Kuwait ... because Saddam said he wasn't) ... but Saddam miscalculated when he started marshaling forces along the Saudi border.

Long Strange Journey: An Intelligence Memoir
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2

loc2030-36
An unnamed Bush administration official actually went so far as to claim that the lack of Iraqi propaganda attacks on Saudi Arabia argued against an Iraqi invasion of the kingdom. Focusing solely on Iraqi statements was precisely the mistake made prior to the invasion of Kuwait. Saddam's public declaration that he desired a "dialogue" with Kuwait had been nothing more than a smoke screen behind which he'd hid his military preparations--just as Hitler had over 50 years before when he annexed Austria. I had a sick feeling that history was about to repeat itself.

... snip ...

it then gets into how the US special relationship with Saudi Arabia was different from the kind of relationship the US had with Kuwait ... and a threat to Saudi Arabia was considered much more significant.

other recent posts mentioning "Long Strange Journey: An Intelligence Memoir"
https://www.garlic.com/~lynn/2012m.html#70 Long Strange Journey: An Intelligence Memoir
https://www.garlic.com/~lynn/2012n.html#10 Jedi Knights
https://www.garlic.com/~lynn/2012n.html#38 Jedi Knights
https://www.garlic.com/~lynn/2012n.html#83 Protected: R.I.P. Containment

"Jedi Knights" is part of a thread about the US News & World Report article on John Boyd at the time of Desert Storm, titled "The fight to change how America fights", mentioning Boyd's "Jedi Knights". other past posts mentioning the article:
https://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive ... long posting warning
https://www.garlic.com/~lynn/99.html#120 atomic History
https://www.garlic.com/~lynn/2002c.html#14 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2005j.html#15 The 8008
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs
https://www.garlic.com/~lynn/2009p.html#60 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009q.html#38 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2010f.html#55 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2011j.html#14 Innovation and iconoclasm
https://www.garlic.com/~lynn/2011n.html#90 There is much we can learn from TE Lawrence

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT: Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Mon, 29 Oct 2012 14:47:55 -0400
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
What you're saying, I take it, is that they manage to keep their bureaucracies under control. If so, this is refreshing news, considering how many other areas where economies of scale have been breaking down. Mind you, governments aren't a good example to consider, since they are nothing but bureaucracies.

assuming relatively homogeneous operation ... doesn't necessarily hold if it is a lot of different stuff under a common umbrella ... recent reference
https://www.garlic.com/~lynn/2012l.html#60 Singer Cartons of Punch Cards

early part of century ... reviewed periodic industry publication that gave avgs. for regional financial institutions across several thousand measures compared to national financial institutions ... and regional institutions were slightly more profitable/efficient than national ... negating the justification for GLBA repeal of Glass-Steagall and the rise of too-big-to-fail (major motivation seems to be top executive compensation proportional to size of organization, not how well it was run). The too-big-to-fail can still be less efficient ... even if they have cut the bureaucratic "overhead" to the bone ... simply because of the difficulty of scaling dissimilar operations.

other recent refs to industry publication
https://www.garlic.com/~lynn/2012.html#25 You may ask yourself, well, how did I get here?
https://www.garlic.com/~lynn/2012e.html#1 The Dallas Fed Is Calling For The Immediate Breakup Of Large Banks
https://www.garlic.com/~lynn/2012g.html#9 JPM LOSES $2 BILLION USD!
https://www.garlic.com/~lynn/2012g.html#84 Monopoly/ Cartons of Punch Cards

The too-big-to-fail seemed to also foster a sense of privilege ... viewing that no matter how badly they perform, the gov. will come to their rescue ... and no matter how badly they act, they are also too-big-to-jail ... misc. recent posts mentioning too-big-to-jail:
https://www.garlic.com/~lynn/2012e.html#16 Wonder if they know how Boydian they are?
https://www.garlic.com/~lynn/2012e.html#35 The Dallas Fed Is Calling For The Immediate Breakup Of Large Banks
https://www.garlic.com/~lynn/2012e.html#37 The $30 billion Social Security hack
https://www.garlic.com/~lynn/2012f.html#88 Defense acquisitions are broken and no one cares
https://www.garlic.com/~lynn/2012g.html#9 JPM LOSES $2 BILLION USD!
https://www.garlic.com/~lynn/2012g.html#20 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012i.html#14 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012j.html#25 This Is The Wall Street Scandal Of All Scandals
https://www.garlic.com/~lynn/2012k.html#37 If all of the American earned dollars hidden in off shore accounts were uncovered and taxed do you think we would be able to close the deficit gap?
https://www.garlic.com/~lynn/2012m.html#30 General Mills computer
https://www.garlic.com/~lynn/2012n.html#0 General Mills computer
https://www.garlic.com/~lynn/2012n.html#55 U.S. Sues Wells Fargo, Accusing It of Lying About Mortgages

past posts in this thread:
https://www.garlic.com/~lynn/2012n.html#84 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#1 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#2 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#3 OT: Tax breaks to Oracle debated
https://www.garlic.com/~lynn/2012o.html#9 OT: Tax breaks to Oracle debated

note that I periodically have a similar, but different rant about the effect of RDBMS/SQL technology. having participated in the original implementation ... past posts
https://www.garlic.com/~lynn/submain.html#systemr

there were various simplifications that aided the efficiency of ATM-type financial transactions ... but also forced things towards a common/homogeneous structure. The rise of RDBMS/SQL may be contributing to the difficulty of scaling dissimilar & non-homogeneous operations (i.e. the ability for dealing with complexity can be strongly influenced by the tools being used).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframes are still the best platform for high volume transaction processing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 29 Oct 2012
Subject: Mainframes are still the best platform for high volume transaction processing.
Blog: Mainframe Experts
re:
http://lnkd.in/RsuUyw
and
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing

See www.tpc.org for benchmarks

I was at San Jose Research and participated in the original relational/SQL implementation ... working with Jim Gray. When Jim left for Tandem ... he pawned off some number of things on me ... including consulting with the IMS group on DBMS technology and performance. Jim went on to pioneer business benchmarks, which was the start of tpc.org:
https://www.tpc.org/information/who/gray5.asp

While IBM has done numerous industry standard benchmarks for their non-mainframe platforms ... it has been quite difficult to find information about mainframe business transaction industry standard benchmarks. In the past, I've found unofficial mainframe industry standard benchmark numbers, but there seem to be efforts to get the numbers taken down whenever they appear. IBM will make periodic references like max configured zEC12 will perform 30% more DBMS than max configured z196 (even tho zEC12 has 50% more processing than z196).

My wife had been in the JES2/JES3 group and then was con'ed into going to POK to be in charge of (mainframe) loosely-coupled architecture where she did Peer-Coupled Shared Data architecture. Because of ongoing battles with the communication group over being forced to use SNA for loosely-coupled operation and little uptake (except for IMS hot-standby until parallel sysplex), she didn't remain long in the position.
https://www.garlic.com/~lynn/submain.html#shareddata

Later we did IBM's HA/CMP ... High Availability Cluster Multi-Processing project ... this early JAN1992 meeting (in Ellison's conference room) references having 128-system high-availability cluster in production operation by YE1992 (using fibre-channel for storage subsystem sharing and access; technology that FICON was later layered on top of, substantially reducing throughput compared to underlying fibre-channel performance)
https://www.garlic.com/~lynn/95.html#13
past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

the mainframe DB2 group complained that if I was allowed to continue, I would be a minimum of five years ahead of them. I also coined the terms disaster survivability and geographic survivability when out marketing. Partially as a result, I was asked to author sections for the corporate continuous availability strategy document. However, both Rochester (as/400) and POK (mainframe) complained (that they couldn't meet the requirements) and my section was pulled. misc. past posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available

In any case, at the end of Jan1992, the scale-up effort was transferred to IBM Kingston and we were told we couldn't work on anything with more than four processors. A couple weeks later the scale-up was announced as IBM supercomputer for "numerical intensive only" ... not able to be used for commercial work ... press reference
https://www.garlic.com/~lynn/2001n.html#6000clusters1 920217
and later in spring
https://www.garlic.com/~lynn/2001n.html#6000clusters2 920511

In the wake of all this, we decide to leave. More recent post "From The Annals Of Release No Software Before Its Time"
https://www.garlic.com/~lynn/2009p.html#43

In 2009, IBM announced HA/CMP rebranded as pureScale, with benchmark running 100+ systems (not quite 20yrs after the Jan1992 meeting)

This more recent post
https://www.garlic.com/~lynn/2012m.html#28

mentions finding a reference that implied the max configured z196 might handle 7.5m tpmC and, using the 30% number for zEC12 DBMS ... that would come in at approx. 9.74m tpmC. Current top tpmC is 30.25m with a cost of $1.01/tpmC
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp

Also at the z196 cost of $175M ... it would have something like $23/tpmC. There is an ibm power TPC benchmark of 10.4m tpmC and $1.38/tpmC. IBM also has a x3850 benchmark (with 2.4ghz e7-8870) of 3m tpmC and $0.59/tpmC. The E7 is a higher-end chip than the E5 and has somewhat higher cost per BIPS (possibly why you see large cloud vendors doing larger numbers of E5 blades).
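the back-of-envelope arithmetic above can be sketched in a few lines ... note the tpmC and price figures are the unofficial estimates quoted in this post, not published TPC results (TPC lists no mainframe submissions):

```python
# Rough cost/throughput arithmetic using the unofficial estimates above.

z196_tpmc = 7.5e6        # implied max-configured z196 estimate (tpmC)
zec12_gain = 1.30        # IBM's "30% more DBMS" claim for zEC12
zec12_tpmc = z196_tpmc * zec12_gain   # approx 9.75m tpmC from rounded input

z196_cost = 175e6        # quoted max-configured z196 price (USD)
z196_dollars_per_tpmc = z196_cost / z196_tpmc   # approx $23/tpmC

print(f"estimated zEC12: {zec12_tpmc / 1e6:.2f}m tpmC")
print(f"z196 cost/perf: ${z196_dollars_per_tpmc:.2f}/tpmC")
```

(the post's 9.74m figure presumably came from a slightly less-rounded z196 base; starting from 7.5m gives 9.75m)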

disclaimer: when I was an undergraduate in the 60s, the univ. library got an ONR grant to do an online library catalog. Part of the money went for a 2321 datacell. The activity was also selected to be one of the betatest sites for the original CICS product (having been originally developed at a customer site) ... and I got tasked to debug&support the CICS installation (some number of teething problems for deployments in different environments). some past posts
https://www.garlic.com/~lynn/submain.html#cics

IBM E7 blade offerings
http://www.redbooks.ibm.com/abstracts/tips0843.html

E7-2800 (2 chip, 10cores/chip, 20processors)
E7-4800 (4 chip, 10cores/chip, 40processors)
E7-8800 (8 chip, 10cores/chip, 80processors)

IBM e5-2600 blade offerings
http://www.research.ibm.com/about/top_innovations_history.shtml

comparing 4-chip e5-4600 (8cores/chip) with 4-chip e7-4800 (10cores/chip)
http://bladesmadesimple.com/2012/06/4-socket-blade-serverwhich-intel-cpu-do-you-choose/

If you have memory intensive enterprise-level applications, you will want to use the Intel Xeon E7-4800 CPU. For General Purpose, or High Performance Compute work, go with the Intel Xeon E5-4600.

... snip ...

compares 2-chip e5-2600 with 4-chip e5-4600
http://www.theregister.co.uk/2012/05/14/intel_xeon_e5_2600_4600_server_chips/page3.html

for specint, 4chip is 1.96 times 2chip
for specfp, 4chip is 1.88 times 2chip
for virtualization, 4chip is 2.07 times 2chip

this gives ibm z9, z10, z196 specint & specfp
http://www.vm.ibm.com/education/lvc/lvc0929c.pdf

relative comparison of z10 to z9 and relative comparison of z196 to z10

official specint results
http://www.spec.org/cpu2006/results/cpu2006.html

lots of e5 results (including ibm e5-2690 at 55.4/59.6) but no actual mainframe results (to compare to other platforms)

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT: Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Sat, 03 Nov 2012 11:02:55 -0400
jmfbahciv <See.above@aol.com> writes:
I preferred using the toll highways when I drove between Mass. and Michigan. The only traffic to deal with was (usually) going in the same direction at (mostly) the same speed. Rt. 9 in Framingham area was so much worse.

first time drove into mass, was coming in on mass pike ... spent nearly a week driving i90 from the west coast. mass pike was the worst of any of the roads ... even worse than some country roads in the rockies.

later they said that the toll was kept on mass pike long after original construction had been paid off ... and claimed that original construction was so shoddy that it helped justify significant annual maintenance (jokes about using water soluble asphalt) ... keeping the tolls in place to pay for the annual maintenance, along with some amount of the funds siphoned off to other places.

they also managed to mention that one of the largest road building/maint. companies in mass ... also did large commercial buildings ... one of which had just collapsed from some snow on the roof (some reference to a large federal bldg. in cambridge possibly named for the head of that company)

i've made jokes about the frost heaves on mass pike in a.f.c. before ... aka it appeared that mass pike had half the road bed depth of western mountain country roads that managed to handle frost heaves w/o problem.
https://www.garlic.com/~lynn/99.html#22 Roads as Runways Was: Re: BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/2002i.html#28 trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#35 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#36 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#42 Transportation
https://www.garlic.com/~lynn/2002j.html#68 Killer Hard Drives - Shrapnel?
https://www.garlic.com/~lynn/2003j.html#11 Idiot drivers
https://www.garlic.com/~lynn/2006h.html#45 The Pankian Metaphor
https://www.garlic.com/~lynn/2008l.html#24 dollar coins
https://www.garlic.com/~lynn/2008l.html#26 dollar coins
https://www.garlic.com/~lynn/2008l.html#27 dollar coins
https://www.garlic.com/~lynn/2008l.html#36 dollar coins

--
virtualization experience starting Jan1968, online at home since Mar1970

Should you support or abandon the 3270 as a User Interface?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 04 Nov 2012
Subject: Should you support or abandon the 3270 as a User Interface?
Blog: Enterprise Systems
re:
http://lnkd.in/bMXsgv
and
https://www.garlic.com/~lynn/2012n.html#61 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012n.html#64 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?

Undergraduate at univ. in late 60s, i modified HASP to support tty/ascii terminals and crafted an editor that supported the original CMS editor syntax (pre-xedit) ... that traces back to CTSS ... for a sort of CRJE type environment ... which i considered much better than TSO. misc past posts mentioning hasp
https://www.garlic.com/~lynn/submain.html#hasp

We were doing ibm's ha/cmp product and had this meeting early jan1992 on cluster scale-up in ellison's conference room ... referenced in this old post
https://www.garlic.com/~lynn/95.html#13

a couple weeks later the cluster scale-up stuff was transferred and we were told we couldn't work on anything with more than four processors (and a couple weeks later, it is announced as ibm supercomputer for scientific and numeric intensive *ONLY*) and we decide to leave. past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

some of the other people in the ellison meeting move on and later show up at a small client/server startup responsible for something called the "commerce server" ... we are brought in as consultants because they want to do payment transactions on their server ... the startup had also invented this technology called "SSL" they wanted to use ... the result is now frequently called "electronic commerce". I have authority over everything between the servers and what is called the payment gateway (sits on the internet and handles commerce server payment transactions to the payment networks). misc. past posts mentioning payment gateway (for electronic commerce)
https://www.garlic.com/~lynn/subnetwork.html#gateway

The lower-end non-internet point-of-sale terminals do dialup async. However, the higher-end have leased lines and x.25. There are a lot of diagnostics in the point-to-point x.25 leased line modems, and the problem desk was expected to do first level problem determination within five minutes.

Some number of merchants moving from leased-line x.25 to internet were still expecting x.25 leased line diagnostic results. Early betatest had some situations crop up that, after 3hrs of problem determination, were closed as NTF (no trouble found). I had to do some amount of diagnostic software & procedures and a 40pg diagnostic manual to bring things close to five minute first level problem determination and eliminate lengthy, several hr, manual problem investigations that failed to identify the problem.

Disclaimer ... earlier my wife had done a stint as chief architect for Amadeus (euro airline res system scaffolded off eastern's "system one") ... she didn't last very long because she went with x.25 ... and the SNA forces saw to it that she was replaced. It didn't do them much good because Amadeus went with x.25 anyway.

There is some sense that the original arpanet with host protocol & IMPs was much more like SNA, with a much more structured environment (with SNA not even having a network layer). This is one of the reasons that I've claimed that the (non-SNA) internal network was larger than the arpanet/internet from just about the beginning until sometime late '85 or early '86 (internal network nodes having a form of gateway in nearly every node .... or at least the non-MVS nodes). The internet didn't get this until the great change-over to internetworking protocol (tcp/ip) that happened on 1january1983. some past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some past posts mentioning internet
https://www.garlic.com/~lynn/subnetwork.html#internet

One of the great downsides for the internal network was the change-over to SNA in the 87/88 time-frame ... when it would have been much more efficient, cost effective, etc ... to have switched to tcp/ip (something akin to what bitnet did for bitnet-II). Bitnet (& earn) was ibm sponsored university network ... using technology similar to that used for the internal network ... misc. past posts mentioning bitnet/earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

imagine an sna operation allowing billions of dynamically created network nodes arbitrarily connecting and disconnecting with no previous definition. SNA doesn't even have a network layer. The closest is APPN. Mid-80s, the person responsible for APPN and I reported to the same executive and I would periodically needle him about coming to work on real networking & that the SNA organization would never appreciate his efforts. When it came to announce APPN, the SNA organization strongly objected. The announcement was delayed nearly two months while the SNA objections were escalated ... finally APPN was announced but the announcement letter was carefully rewritten to avoid even implying any relationship between APPN and SNA.

Also about the same time, I was working with a baby bell that had implemented an SSCP/NCP spoofer on Series/1, with the Series/1 emulating 37x5/NCP, simulating all resources as cross-domain but owned by the outboard networking ... and carrying all SNA traffic over a *REAL* networking environment. Part of a presentation that I made at the fall '86 SNA architecture review board meeting in Raleigh
https://www.garlic.com/~lynn/99.html#67

afterwards there was all sorts of FUD generated by the SNA organization about the analysis ... but the 3725 details were all verified by the corporation's officially sanctioned "configurator" on the world-wide sales&marketing support HONE system (and it was compared to a real live running environment ... then part of a regional bell after the split)

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT: Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Sun, 04 Nov 2012 11:21:28 -0500
Dave Garland <dave.garland@wizinfo.com> writes:
Add the people who came up with CDOs, pushed mortgages to people who they knew couldn't afford them, ran the Enron scams, the Madoffs, etc. Rather than being worthless, that's a crowd with massive negative worth.

as an aside ... lots of high-earning movies get officially listed as having losses ... creative bookkeeping & accounting rules ... lots of it aided and abetted by congress ... the stuff that allows private equity and hedge funds to classify income under a special rule at half the tax rate, and too-big-to-fail to carry trillions in triple-A rated toxic CDOs "off-book".

sarbanes-oxley was supposedly passed to *prevent* future Enrons & Worldcoms ... although there were already regulations that should have prevented them (all the uproar over SOX possibly considered as misdirection and obfuscation that things weren't really going to change).

Besides the testimony of the person that tried unsuccessfully for a decade to get SEC to do something about Madoff ... GAO apparently didn't think SEC was doing anything and started doing reports of fraudulent public company financial filings ... even showing an uptick after SOX ... in theory SOX should have all the executives and auditors doing jailtime:
http://www.gao.gov/products/GAO-03-395R
http://www.gao.gov/products/GAO-06-678
https://www.gao.gov/products/gao-06-1079sp

While repeal of Glass-Steagall didn't create the toxic CDO financial mess ... it did allow the too-big-to-fail ... and when they played in the triple-A rated toxic CDOs (and off-balance creative bookkeeping) ... it supposedly justified the public bailouts for their unethical/criminal behavior.

The repeal of Glass-Steagall, promoting Worldcom/Enron, and a good part of the severity of the economic mess were tightly intertwined ... recent posts detailing some of how it was intertwined
https://www.garlic.com/~lynn/2012c.html#31 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012d.html#5 PC industry is heading for more change
https://www.garlic.com/~lynn/2012e.html#57 speculation
https://www.garlic.com/~lynn/2012g.html#59 Why Hasn't The Government Prosecuted Anyone For The 2008 Financial recession?
https://www.garlic.com/~lynn/2012g.html#77 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#94 Naked emperors, holy cows and Libor
https://www.garlic.com/~lynn/2012k.html#38 Four Signs Your Awesome Investment May Actually Be A Ponzi Scheme

and the major players on wallstreet and in Washington were also tightly intertwined ... a number of recent secretaries of treasury being former CEOs of the major wallstreet players ... and/or closely aligned with them ... along with too-big-to-fail.

I've mentioned Gerstner in competition to be next CEO of AMEX; Gerstner wins and the other person leaves ... takes his protege Jamie Dimon and goes off to Baltimore to take over a loan operation (some characterize as loan sharking). AMEX is in competition with KKR for RJR and KKR wins. KKR then hires away Gerstner to turn around RJR. Early 90s, IBM goes into the red ... same year AMEX spins off its payment card outsourcer in the largest IPO up until then (first data). The IBM board then brings in Gerstner to resurrect IBM.

The loser for next CEO of AMEX reappears and takes over CITI in violation of Glass-Steagall. Greenspan gives him an exemption while he lobbies washington for the repeal of Glass-Steagall. After the repeal of Glass-Steagall ... the Sec. of treasury (former CEO at GS) leaves and becomes co-chairman at CITI. Next administration, the new sec. of treasury is also a former CEO of GS ... all during the economic mess. The worst offender involved in triple-A rated toxic CDOs during the economic mess is CITI (also having a co-chairman that was former CEO of GS and former sec. of treasury).

After the economic mess, the new administration has a sec. of treasury that was the main regulator for wallstreet during the economic mess (Sheila Bair characterizes the current sec. of treasury as a protege of the former CEO of GS and former sec. of treasury that resigned after the repeal of Glass-Steagall to become co-chairman at CITI; Sheila Bair's recent book also characterizes this sec. of treasury as appearing to be primarily acting in the best interests of CITI). "Confidence Men" references that the economic A-team was instrumental in getting the current administration elected ... but when it came to appointments it was the economic B-team ... many members that were heavily involved in the economic mess ... aka the A-team was going to choose the "Swedish" solution (in the choice between the swedish solution and the japan zombie bank solution) ... as well as holding those responsible accountable.

... recent posts mentioning Sheila Bair:
https://www.garlic.com/~lynn/2012.html#21 Zombie Banks
https://www.garlic.com/~lynn/2012d.html#25 PC industry is heading for more change
https://www.garlic.com/~lynn/2012f.html#14 Free $10 Million Loans For All! and Other Wall Street Notes
https://www.garlic.com/~lynn/2012g.html#61 Why Hasn't The Government Prosecuted Anyone For The 2008 Financial recession?
https://www.garlic.com/~lynn/2012h.html#5 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#25 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#26 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#37 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#58 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#64 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012m.html#57 General Mills computer
https://www.garlic.com/~lynn/2012m.html#58 General Mills computer
https://www.garlic.com/~lynn/2012m.html#61 General Mills computer
https://www.garlic.com/~lynn/2012m.html#63 General Mills computer
https://www.garlic.com/~lynn/2012n.html#55 U.S. Sues Wells Fargo, Accusing It of Lying About Mortgages
https://www.garlic.com/~lynn/2012n.html#57 Bull by the Horns

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT: Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Sun, 04 Nov 2012 11:28:53 -0500
jmfbahciv <See.above@aol.com> writes:
Don't remember that. Was it in Conn. or R.I.?

large federal bldg. in cambridge ... DOT, named for kennedy's sec of transportation
https://www.garlic.com/~lynn/2012o.html#12

the heritage seems to have extended to the "big dig" ... some reports that it was 90% graft ... it was at least ten times the original estimate. there was some quote attributed to sen. kennedy that the state of mass. deserved the economic stimulus. past posts mentioning the big dig ... including the claim that it was the most expensive highway project in the country
https://www.garlic.com/~lynn/2003i.html#25 TGV in the USA?
https://www.garlic.com/~lynn/2008k.html#73 Cormpany sponsored insurance
https://www.garlic.com/~lynn/2008r.html#41 fraying infrastructure
https://www.garlic.com/~lynn/2008r.html#56 IBM drops Power7 drain in 'Blue Waters'
https://www.garlic.com/~lynn/2009j.html#0 Urban transportation
https://www.garlic.com/~lynn/2009o.html#55 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2012b.html#11 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2012b.html#14 The PC industry is heading for collapse

--
virtualization experience starting Jan1968, online at home since Mar1970

OT: Tax breaks to Oracle debated

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT: Tax breaks to Oracle debated
Newsgroups: alt.folklore.computers
Date: Sun, 04 Nov 2012 22:30:50 -0500
Andrew Swallow <am.swallow@btinternet.com> writes:
Also fighting two wars whilst cutting the number of troops.

earlier post in the thread mentioning the $12T budget gap created last decade ... and that tax revenue cuts and spending increases have momentum that continues
https://www.garlic.com/~lynn/2012o.html#1 OT: Tax breaks to Oracle debated

and that $2T of $6T increase was for DOD.

other recent posts mentioning $12T budget gap
https://www.garlic.com/~lynn/2012c.html#50 They're Trying to Block Military Cuts
https://www.garlic.com/~lynn/2012c.html#52 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#53 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#42 China's J-20 Stealth Fighter Is Already Doing A Whole Lot More Than Anyone Expected
https://www.garlic.com/~lynn/2012d.html#46 Is Washington So Bad at Strategy?
https://www.garlic.com/~lynn/2012d.html#53 "Scoring" The Romney Tax Plan: Trillions Of Dollars Of Deficits As Far As The Eye Can See
https://www.garlic.com/~lynn/2012d.html#60 Memory versus processor speed
https://www.garlic.com/~lynn/2012e.html#25 We are on the brink of historic decision [referring to defence cuts]
https://www.garlic.com/~lynn/2012e.html#40 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#58 Word Length
https://www.garlic.com/~lynn/2012f.html#31 Rome speaks to us. Their example can inspire us to avoid their fate
https://www.garlic.com/~lynn/2012f.html#61 Zakaria: by itself, Buffett rule is good
https://www.garlic.com/~lynn/2012f.html#68 'Gutting' Our Military
https://www.garlic.com/~lynn/2012f.html#81 The Pentagon's New Defense Clandestine Service
https://www.garlic.com/~lynn/2012f.html#88 Defense acquisitions are broken and no one cares
https://www.garlic.com/~lynn/2012g.html#6 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#45 Fareed Zakaria
https://www.garlic.com/~lynn/2012h.html#5 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#6 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#25 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#26 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#27 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#30 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#33 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#50 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#61 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#68 Interesting News Article
https://www.garlic.com/~lynn/2012i.html#0 Interesting News Article
https://www.garlic.com/~lynn/2012i.html#41 Lawmakers reworked financial portfolios after talks with Fed, Treasury officials
https://www.garlic.com/~lynn/2012i.html#81 Should the IBM approach be given a chance to fix the health care system?
https://www.garlic.com/~lynn/2012k.html#37 If all of the American earned dollars hidden in off shore accounts were uncovered and taxed do you think we would be able to close the deficit gap?
https://www.garlic.com/~lynn/2012k.html#74 Unthinkable, Predictable Disasters
https://www.garlic.com/~lynn/2012k.html#79 Romney and Ryan's Phony Deficit-Reduction Plan
https://www.garlic.com/~lynn/2012l.html#55 CALCULATORS
https://www.garlic.com/~lynn/2012l.html#85 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2012l.html#97 What a Caveman Can Teach You About Strategy
https://www.garlic.com/~lynn/2012m.html#33 General Mills computer

--
virtualization experience starting Jan1968, online at home since Mar1970

Initial ideas (orientation) constrain creativity

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 28 Oct 2012
Subject: Initial ideas (orientation) constrain creativity
Blog: Boyd Strategy
re:
http://lnkd.in/82KbHa
and
https://www.garlic.com/~lynn/2012o.html#8 Initial ideas (orientation) constrain creativity

Steele has a recent Lakoff reference here
http://www.phibetaiota.net/2012/11/tom-atlee-systemic-causation-sandy/

first ran into a Lakoff reference circa 1991 at the MIT bookstore & his "Women, Fire, and Dangerous Things"

In finance it comes up as "systemic risk" ... cascading events where the results are much more severe than the triggering event ... this is what was used to justify the bail-out of the too-big-to-fail ... which then results in "moral hazard" ... the expectation of always being bailed out results in individuals taking ever increasing risks (huge upside w/o corresponding downside because of the expectation of bailouts).

Periodically referenced is a relatively minor european bank failing and unable to complete nightly settlement ... which then precipitates a whole sequence of ever increasing failures because of cascading inability to "settle". That is one of the reasons for funds held on deposit ... they can be used to complete settlement even in the case of failures.

This then takes us to resiliency ... extra resources and/or failure-resistant processes .... ever increasing optimization removes resiliency ... things continue to roll along only as long as every single thing operates perfectly. Such an environment is also prone to asymmetric attacks ... a relatively minor disruption that can cascade into serious consequences.

Disclaimer: I spent a large part of my career doing business-critical and high-availability data processing systems ... including some amount of what is now called "electronic commerce". We were brought in as consultants to a small client/server startup that wanted to do payment transactions on their server; they had also invented this technology called "SSL" they wanted to use. We had to do detailed studies to map the technology to the business process ... as well as some number of security requirements for deployment and use. Almost immediately many of the requirements were violated, resulting in lots of the exploits we continue to see today. However, on the backside we had much tighter control over what is called the "payment gateway" (sits on the internet and handles payment transactions between servers and the payment networks) ... which has had none of the exploits that are found all over the rest of the internet. misc. past posts mentioning the payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

One scenario for the current environment is that it provides some small financial benefit to certain business interests .... but the cost of the consequences is spread over nearly every entity that makes use of the internet.

What comes first, observe or orient ... seems analogous to how much of the brain's wiring is determined by genetics and how much is developed/evolves from experience. In the brain case, one could claim that evolution is an ongoing process selecting the starting-point wiring for a specific entity .... but it is an ongoing iteration loop that could be considered going back millions of years ... making the case that it starts with observation ... and continues to iterate not only in the same individual but across generations. Then bias might be considered slow-changing orientation that runs into trouble when the rate of change increases rapidly.

--
virtualization experience starting Jan1968, online at home since Mar1970

other days around me

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: other days around me
Newsgroups: alt.folklore.computers
Date: Tue, 06 Nov 2012 10:47:38 -0500
jmfbahciv <See.above@aol.com> writes:
That's possible. FEMA is considered an infinte pot of money when big messes happen and noone tends to do the required preventitive maintenance.

there was analysis that half of total federal flood insurance went to the same area in mississippi year after year ... even tho congress passed a law decades ago that rebuilding on a flood plain year after year was no longer eligible for federal flood insurance (somehow the law was ignored and funds continued to flow)

there was some reference to a politician saying that mississippi *deserved* the funds (as a sort of federal stimulus) ... analogous to the quote attributed to kennedy that mass. deserved the federal funds that went into the enormous "big dig" scam
https://www.garlic.com/~lynn/2012o.html#15 OT: Tax breaks to Oracle debated

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler vs. COBOL--processing time, space needed

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Assembler vs. COBOL--processing time, space needed
Newsgroups: alt.folklore.computers
Date: Fri, 09 Nov 2012 15:39:00 -0500
Dan Espen <despen@verizon.net> writes:
Wow, brush off the cobwebs!

I remember a site using some 3rd party optimizer. I was really impressed that "PERFORM" generated one instruction, "BAL" for the PERFORM.

I can't remember what the IBM compiler generated but it was more than one instruction.

S/360 presents a big problem with the limited range of a base register (4K) and a limited number of base registers (16). The optimizer has to figure out what areas of memory are referenced a lot and deserve a dedicated register and which areas are only referenced in exception cases and use temporary base registers for those areas.

An Assembler programmer knows that stuff because he understands the data flow of the program. The COBOL compiler can't get it right at compile time, even to today.


but it helped some with the intro of virtual memory and 4k pages.
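
The base-register pressure described above can be sketched as a tiny greedy allocator (a minimal illustration only, not how any actual compiler worked; the region names, reference counts, and register budget are all made up):

```python
# S/360 base+displacement addressing reaches only 4K bytes past a base
# register, and only a few of the 16 GPRs can be spared as permanent
# bases.  So the hottest 4K regions get dedicated base registers and
# everything else is addressed by reloading a temporary base each time.
def assign_bases(ref_counts, free_regs):
    """ref_counts: {region: number of references};
    free_regs: GPRs available as dedicated bases."""
    by_heat = sorted(ref_counts, key=ref_counts.get, reverse=True)
    hot, cold = by_heat[:len(free_regs)], by_heat[len(free_regs):]
    return dict(zip(hot, free_regs)), cold

# hypothetical reference profile for four 4K data regions
regions = {"working_storage": 9000, "file_buffers": 4200,
           "error_messages": 12, "init_only": 3}
dedicated, cold = assign_bases(regions, free_regs=[12, 11])
# dedicated -> {"working_storage": 12, "file_buffers": 11}
# cold      -> ["error_messages", "init_only"] (pay a base reload per use)
```

The quoted complaint amounts to this: the compiler only sees something like static reference counts, while the assembler programmer knows the actual data flow.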

science center had done virtual machine cp40 in the mid-60s on 360/40 with hardware modifications supporting virtual memory ... which morphed into cp/67 when standard 360/67 with virtual memory became available.

in the 70s, the science center did a whole lot of performance work, instrumenting cp67 & vm370 and gathering data for decades on internal systems growing into the hundreds; numerous simulation and modeling efforts ... eventually growing into capacity planning. one of the analytical models, done in APL, was made available on the internal world-wide sales&marketing support system ... local branch people could enter a description of a customer's workload and configuration and ask what-if questions about changes to workload &/or configuration.

One of the other tools was a trace facility ... it tracked instruction and storage references and then did semi-automated program reorganization to optimize virtual memory operation (Bayesian cluster analysis). it was eventually made available spring 1976 as a product called vs/repack.

it was used internally by lots of the 360 products (compilers, dbms, etc) as part of the transition from real-storage to virtual memory (i.e. dos, mft, mvt transitions to 370 virtual memory).

the 4k register-addressing limit tended to encourage consolidated/compact storage use ... providing slightly better virtual memory operation
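
The vs/repack idea can be sketched roughly as follows: given a reference trace, pack code/data that are referenced close together in time onto the same 4K pages, shrinking the working set. The real product used Bayesian cluster analysis on instruction and storage traces; this greedy affinity packing, and all the names in it, are just an illustrative stand-in:

```python
from collections import defaultdict

PAGE = 4096

def reorganize(trace, sizes, window=8):
    """trace: sequence of module names in reference order;
    sizes: {module: bytes}.  Returns {module: page index}."""
    # count how often two modules are referenced within `window` events
    affinity = defaultdict(int)
    for i, m in enumerate(trace):
        for n in trace[i + 1 : i + window]:
            if n != m:
                affinity[frozenset((m, n))] += 1
    page_of, used = {}, []          # module -> page, page -> bytes used

    def place(m, p):
        if sizes[m] <= PAGE - used[p]:
            used[p] += sizes[m]
            page_of[m] = p
            return True
        return False

    # strongest affinities first: try to co-locate each pair
    for pair in sorted(affinity, key=affinity.get, reverse=True):
        a, b = sorted(pair)
        for m, other in ((a, b), (b, a)):
            if m in page_of:
                continue
            cands = [page_of[other]] if other in page_of else []
            cands += [p for p in range(len(used)) if p not in cands]
            if not any(place(m, p) for p in cands):
                used.append(0)
                place(m, len(used) - 1)
    for m in sizes:                 # anything never traced: first fit
        if m not in page_of:
            if not any(place(m, p) for p in range(len(used))):
                used.append(0)
                place(m, len(used) - 1)
    return page_of

# toy trace: "a"/"b" always run together, then "c"/"d" together
trace = ["a", "b"] * 5 + ["c", "d"] * 5
page_of = reorganize(trace, {m: 1500 for m in "abcd"}, window=2)
# "a" and "b" share one 4K page, "c" and "d" share another
```

With a random layout, "a" and "b" could easily sit on different pages and double the working set of the hot loop; packing by trace affinity is exactly the kind of reorganization that paid off in the real-storage-to-virtual-memory transition.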

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler vs. COBOL--processing time, space needed

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Assembler vs. COBOL--processing time, space needed
Newsgroups: alt.folklore.computers
Date: Fri, 09 Nov 2012 18:26:05 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
One of the other tools was trace ... that tracked instruction and storage references and then did semi-automated program reorganization to optimize virtual memory operation (Bayesian cluster analysis). it was eventually made available spring 1976 as product called vs/repack.

re:
https://www.garlic.com/~lynn/2012o.html#19 Assembler vs. COBOL--processing time, space needed

an early version helped with the move of apl\360 to cms\apl

apl\360 had its own monitor and did its own workspace (real memory) swapping (typical configuration: 16kbyte or 32kbyte max. workspace size)

the move to cms\apl in (virtual memory) cp67/cms (virtual machine) allowed workspaces to be (demand-paged) virtual address space size. This (and an interface for accessing cms system services ... like file read/write) allowed real-world sized applications. business planning people in (armonk) corporate hdqtrs loaded the most valuable corporate information (detailed customer information) on the cambridge science center system ... and did business modeling in cms\apl (this also required quite a bit of security because non-employees from various boston area univ ... including students ... were using the cambridge system). misc. past posts mentioning the science center on the 4th flr of 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

the problem that came up working in a demand-paged, virtual memory environment was apl\360 storage allocation ... it allocated a new storage location on every assignment ... this continued until workspace storage was exhausted, then it did garbage collection ... compacting in-use storage to a single contiguous area ... and then it started all over again.

the (vs/repack precursor) tool included printouts of storage-use traces ... printed on the reverse (white) side of green-bar paper ... time along the horizontal, storage location along the vertical ... storage (both instruction and data refs) scaled to about 7ft of printout ... and the time-line scaled to about 30ft ... several lengths of printout taped together along an internal hall in the science center.

an apl\360 program would quickly alter every location in the available workspace memory ... looking like the sloped side of a sawtooth ... and then garbage collect ... a solid vertical line (in the printout). with a 16kbyte workspace where the whole thing was always swapped anyway ... it made no difference.

however, with a several-mbyte virtual memory ... it quickly alters every page and then garbage collects ... the use of storage is based on the number of assignment operations ... independent of the actual aggregate storage in use at any one moment. in any case, the apl\360 storage management had to be redone for the virtual memory environment.
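
The sawtooth behavior can be modeled in a few lines (an illustrative toy only; the cell size, workspace size, and loop count are made-up numbers): every assignment bumps a high-water mark, garbage collection compacts live values back to the bottom, and the set of pages touched grows with the number of assignments rather than with the live data:

```python
PAGE = 4096     # page size
CELL = 16       # assumed size of one allocated value

class Workspace:
    """Toy apl\\360-style allocator: fresh cell on every assignment,
    compacting garbage collection when the workspace is exhausted."""
    def __init__(self, size):
        self.size, self.top = size, 0
        self.live = {}         # variable name -> current cell address
        self.touched = set()   # distinct pages ever written

    def assign(self, name):
        if self.top + CELL > self.size:
            self.garbage_collect()
        self.live[name] = self.top        # always a brand-new cell
        self.touched.add(self.top // PAGE)
        self.top += CELL

    def garbage_collect(self):
        # compact live cells into a contiguous area at the bottom
        for i, name in enumerate(sorted(self.live)):
            self.live[name] = i * CELL
        self.top = len(self.live) * CELL

ws = Workspace(64 * PAGE)      # a 256kbyte "workspace"
for _ in range(20000):
    ws.assign("x")             # ONE live variable, reassigned 20,000 times
# only one 16-byte value is live, yet all 64 pages have been dirtied
```

With the original 16kbyte swapped workspace the whole thing moved in and out anyway; under demand paging this access pattern forces every page of a mostly-empty workspace into the working set.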

recent announcement that the apl\360 source code is available:
http://www.computerhistory.org/atchm/the-apl-programming-language-source-code/

The previously mentioned world-wide sales&marketing online HONE system started out with several virtual machine cp67 datacenters to give branch office technical people the opportunity to practice operating system skills (in virtual machines) ... but it came to be dominated by sales/marketing (non-technical) support applications (mostly written in apl ... starting out with cms\apl). misc. past posts mentioning hone &/or apl
https://www.garlic.com/~lynn/subtopic.html#hone

misc. past posts mentioning cms\apl
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/97.html#4 Mythical beasts (was IBM... mainframe)
https://www.garlic.com/~lynn/99.html#38 1968 release of APL\360 wanted
https://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ???
https://www.garlic.com/~lynn/2000c.html#49 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000g.html#27 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001i.html#43 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#44 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#30 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#30 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#60 Java, C++ (was Re: Is HTML dead?)
https://www.garlic.com/~lynn/2002h.html#67 history of CMS
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#2 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2002j.html#37 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002p.html#37 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002q.html#47 myths about Multics
https://www.garlic.com/~lynn/2003c.html#16 Early attempts at console humor?
https://www.garlic.com/~lynn/2003c.html#18 Early attempts at console humor?
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003g.html#5 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2003p.html#14 64 bits vs non-coherent MPP was: Re: Itanium strikes again
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004b.html#58 Oldest running code
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#55 If there had been no MS-DOS
https://www.garlic.com/~lynn/2004j.html#25 Wars against bad things
https://www.garlic.com/~lynn/2004j.html#28 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#53 history books on the development of capacity planning (SMF and RMF)
https://www.garlic.com/~lynn/2004m.html#54 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#0 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#6 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#37 passing of iverson
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#10 Multi-processor timing issue
https://www.garlic.com/~lynn/2004q.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#41 something like a CTC on a PC
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100
https://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#27 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
https://www.garlic.com/~lynn/2005h.html#1 Single System Image questions
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#50 APL, J or K?
https://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#34 Not enough parallelism in programming
https://www.garlic.com/~lynn/2005o.html#38 SHARE reflections
https://www.garlic.com/~lynn/2005o.html#46 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005p.html#20 address space
https://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006c.html#44 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006h.html#14 Security
https://www.garlic.com/~lynn/2006j.html#24 virtual memory
https://www.garlic.com/~lynn/2006j.html#39 virtual memory
https://www.garlic.com/~lynn/2006k.html#30 PDP-1
https://www.garlic.com/~lynn/2006m.html#53 DCSS
https://www.garlic.com/~lynn/2006n.html#2 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006o.html#13 The SEL 840 computer
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
https://www.garlic.com/~lynn/2006o.html#30 The Question of Braces in APL-ASCII
https://www.garlic.com/~lynn/2006o.html#52 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006o.html#53 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006s.html#12 Languages that should have made it but didn't
https://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006x.html#19 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007b.html#32 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2007d.html#64 Is computer history taugh now?
https://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging
https://www.garlic.com/~lynn/2007g.html#48 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#62 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case of VM being penetrated by hackers?
https://www.garlic.com/~lynn/2007i.html#77 Sizing CPU
https://www.garlic.com/~lynn/2007j.html#13 Interrupts
https://www.garlic.com/~lynn/2007j.html#17 Newbie question on table design
https://www.garlic.com/~lynn/2007j.html#19 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate
https://www.garlic.com/~lynn/2007k.html#65 Non-Standard Mainframe Language?
https://www.garlic.com/~lynn/2007k.html#67 Non-Standard Mainframe Language?
https://www.garlic.com/~lynn/2007k.html#73 Non-Standard Mainframe Language?
https://www.garlic.com/~lynn/2007l.html#59 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007m.html#15 Patents, Copyrights, Profits, Flex and Hercules
https://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
https://www.garlic.com/~lynn/2007m.html#57 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
https://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation
https://www.garlic.com/~lynn/2007o.html#53 Virtual Storage implementation
https://www.garlic.com/~lynn/2007q.html#23 GETMAIN/FREEMAIN and virtual storage backing up
https://www.garlic.com/~lynn/2007r.html#5 The history of Structure capabilities
https://www.garlic.com/~lynn/2007r.html#68 High order bit in 31/24 bit address
https://www.garlic.com/~lynn/2007s.html#33 Age of IBM VM
https://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM
https://www.garlic.com/~lynn/2007t.html#71 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007v.html#0 IBM mainframe history, was Floating-point myths
https://www.garlic.com/~lynn/2007v.html#48 IBM mainframe history, was Floating-point myths
https://www.garlic.com/~lynn/2007v.html#57 folklore indeed
https://www.garlic.com/~lynn/2008b.html#27 Re-hosting IMB-MAIN
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008d.html#32 Interesting Mainframe Article: 5 Myths Exposed
https://www.garlic.com/~lynn/2008d.html#35 Interesting Mainframe Article: 5 Myths Exposed
https://www.garlic.com/~lynn/2008f.html#36 Object-relational impedence
https://www.garlic.com/~lynn/2008h.html#7 Xephon, are they still in business?
https://www.garlic.com/~lynn/2008h.html#74 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008j.html#89 CLIs and GUIs
https://www.garlic.com/~lynn/2008m.html#36 IBM THINK original equipment sign
https://www.garlic.com/~lynn/2008m.html#42 APL
https://www.garlic.com/~lynn/2008m.html#61 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2008n.html#57 VMware renders multitasking OSes redundant
https://www.garlic.com/~lynn/2008o.html#66 Open Source, Unbundling, and Future System
https://www.garlic.com/~lynn/2008p.html#41 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
https://www.garlic.com/~lynn/2008p.html#42 Password Rules
https://www.garlic.com/~lynn/2008p.html#73 History of preprocessing (Burroughs ALGOL)
https://www.garlic.com/~lynn/2008q.html#48 TOPS-10
https://www.garlic.com/~lynn/2008q.html#59 APL
https://www.garlic.com/~lynn/2008r.html#18 Comprehensive security?
https://www.garlic.com/~lynn/2008r.html#40 Paris
https://www.garlic.com/~lynn/2008s.html#17 IBM PC competitors
https://www.garlic.com/~lynn/2009.html#0 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009.html#6 mvs preemption dispatcher
https://www.garlic.com/~lynn/2009f.html#0 How did the monitor work under TOPS?
https://www.garlic.com/~lynn/2009f.html#18 System/360 Announcement (7Apr64)
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#36 SEs & History Lessons
https://www.garlic.com/~lynn/2009j.html#67 DCSS
https://www.garlic.com/~lynn/2009j.html#77 More named/shared systems
https://www.garlic.com/~lynn/2009l.html#1 Poll results: your favorite IBM tool was IBM-issued laptops
https://www.garlic.com/~lynn/2009l.html#43 SNA: conflicting opinions
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs
https://www.garlic.com/~lynn/2009p.html#33 Survey Revives Depate Over Mainframe's Future
https://www.garlic.com/~lynn/2009q.html#18 email
https://www.garlic.com/~lynn/2009r.html#17 How to reduce the overall monthly cost on a System z environment?
https://www.garlic.com/~lynn/2009r.html#68 360 programs on a z/10
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2010c.html#28 Processes' memory
https://www.garlic.com/~lynn/2010c.html#35 Processes' memory
https://www.garlic.com/~lynn/2010c.html#54 Processes' memory
https://www.garlic.com/~lynn/2010c.html#55 Processes' memory
https://www.garlic.com/~lynn/2010c.html#89 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE
https://www.garlic.com/~lynn/2010d.html#59 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#21 paged-access method
https://www.garlic.com/~lynn/2010e.html#22 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#24 Unbundling & HONE
https://www.garlic.com/~lynn/2010g.html#20 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#64 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010h.html#17 LINUX on the MAINFRAME
https://www.garlic.com/~lynn/2010i.html#11 IBM 5100 First Portable Computer commercial 1977
https://www.garlic.com/~lynn/2010i.html#13 IBM 5100 First Portable Computer commercial 1977
https://www.garlic.com/~lynn/2010i.html#66 Global CIO: Global Banks Form Consortium To Counter HP, IBM, & Oracle
https://www.garlic.com/~lynn/2010j.html#17 Personal use z/OS machines was Re: Multiprise 3k for personal Use?
https://www.garlic.com/~lynn/2010j.html#48 Knuth Got It Wrong
https://www.garlic.com/~lynn/2010j.html#80 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010l.html#15 Age
https://www.garlic.com/~lynn/2010n.html#10 Mainframe Slang terms
https://www.garlic.com/~lynn/2010n.html#17 What non-IBM software products have been most significant to the mainframe's success
https://www.garlic.com/~lynn/2010q.html#35 VMSHARE Archives
https://www.garlic.com/~lynn/2010q.html#51 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2011.html#82 Utility of find single set bit instruction?
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2011d.html#59 The first personal computer (PC)
https://www.garlic.com/~lynn/2011e.html#8 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#58 Collection of APL documents
https://www.garlic.com/~lynn/2011f.html#3 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011f.html#51 US HONE Datacenter consolidation
https://www.garlic.com/~lynn/2011h.html#61 Do you remember back to June 23, 1969 when IBM unbundled
https://www.garlic.com/~lynn/2011i.html#39 Wondering if I am really eligible for this group. I learned my first programming language in 1975
https://www.garlic.com/~lynn/2011i.html#55 Architecture / Instruction Set / Language co-design
https://www.garlic.com/~lynn/2011j.html#48 Opcode X'A0'
https://www.garlic.com/~lynn/2011m.html#27 "Best" versus "worst" programming language you've used?
https://www.garlic.com/~lynn/2011m.html#37 What is IBM culture?
https://www.garlic.com/~lynn/2011m.html#61 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
https://www.garlic.com/~lynn/2011m.html#62 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
https://www.garlic.com/~lynn/2011m.html#69 "Best" versus "worst" programming language you've used?
https://www.garlic.com/~lynn/2011n.html#53 Virginia M. Rometty elected IBM president
https://www.garlic.com/~lynn/2011o.html#44 Data Areas?
https://www.garlic.com/~lynn/2012.html#7 Can any one tell about what is APL language
https://www.garlic.com/~lynn/2012.html#10 Can any one tell about what is APL language
https://www.garlic.com/~lynn/2012.html#14 HONE
https://www.garlic.com/~lynn/2012.html#50 Can any one tell about what is APL language
https://www.garlic.com/~lynn/2012b.html#6 Cloud apps placed well in the economic cycle
https://www.garlic.com/~lynn/2012b.html#43 Where are all the old tech workers?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2012e.html#39 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2012e.html#100 Indirect Bit
https://www.garlic.com/~lynn/2012f.html#40 STSC Story
https://www.garlic.com/~lynn/2012i.html#93 Operating System, what is it?
https://www.garlic.com/~lynn/2012j.html#82 printer history Languages influenced by PL/1
https://www.garlic.com/~lynn/2012k.html#8 International Business Marionette
https://www.garlic.com/~lynn/2012l.html#79 zEC12, and previous generations, "why?" type question - GPU computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler vs. COBOL--processing time, space needed

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Assembler vs. COBOL--processing time, space needed
Newsgroups: alt.folklore.computers
Date: Tue, 13 Nov 2012 22:09:36 -0500
hancock4 writes:
Ok, a PC's CPU is faster today, but what about I/O and the channels, and multiprogramming? The S/360 hardware was designed to handle all of that efficiently, how efficiently would a PC handle it all? Suppose we hung an early CICS and a few terminals off of the PC emulating a S/360--how would they be accommodated in terms of processing time?

Some years ago, wasn't comparative computer performance changed to throughput or wall clock time for a given problem as opposed to MIPS, to recognize I/O and other issues? Wasn't the S/370-158 used as a baseline for that?


s/370-158 & vax ... dhrystone 1mip
https://en.wikipedia.org/wiki/Instructions_per_second

e5-2690 @2.9GHZ 527.55BIPS (ratio of dhrystone iterations to 158/vax)
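the "MIPS"/"BIPS" numbers here are dhrystone ratings: iterations/sec expressed relative to the nominal 1-MIPS reference machines (VAX 11/780 and 370/158). a minimal sketch of the convention (the 1757 iterations/sec baseline is the standard published VAX 11/780 figure; everything else is illustrative):

```python
# Dhrystone "MIPS" convention: a machine's dhrystone iterations/sec
# divided by the VAX 11/780 baseline of 1757 iterations/sec (the VAX
# and 370/158 both being the nominal 1-MIPS reference machines).
VAX_DHRYSTONES_PER_SEC = 1757.0

def dhrystone_mips(iterations_per_sec):
    """Rate a machine in Dhrystone MIPS relative to the 1-MIPS baseline."""
    return iterations_per_sec / VAX_DHRYSTONES_PER_SEC

# the reference machine itself rates exactly 1 MIPS:
assert dhrystone_mips(VAX_DHRYSTONES_PER_SEC) == 1.0
```

so "527.55BIPS" means the e5-2690's aggregate dhrystone iteration rate is 527,550 times the 158/vax baseline.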

recent post discussing e5-2600 blade & current mainframes
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing.

2005 Z9 17.8BIPS, 54processors 330MIPS/processor
2008 Z10 30BIPS, 64processors 460MIPS/processor
2010 Z196 50BIPS, 80processors 624MIPS/processor
2012 ZEC12 75BIPS, 101processors 743MIPS/processor

referenced post also discusses mainframe I/O thruput using "FICON" channels.

FICON is an ibm mainframe channel emulation layer on top of fibre-channel standard ... that drastically reduces the thruput compared to the underlying fibre-channel capacity.

peak (mainframe) z196 at 2M IOPS with 104 FICON channels, 14 storage subsystems and 14 "system assist processors"
ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03169usen/ZSW03169USEN.PDF

references that the 14 system assist processors' peak is 2.2M SSCH/sec, all running at 100% processor busy ... but recommends keeping SAPs at 70% or less (1.5M SSCH/sec).
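the 70% recommendation translates directly into the quoted sustainable rate; a one-line check of the arithmetic:

```python
# 14 SAPs peak at 2.2M SSCH/sec at 100% busy; the recommendation is to
# keep SAPs at 70% or less, capping the sustainable start-subchannel
# rate at 0.7 * 2.2M ~= 1.5M SSCH/sec (the figure quoted above).
peak_ssch_per_sec = 2.2e6
recommended_utilization = 0.70
sustainable = peak_ssch_per_sec * recommended_utilization
assert round(sustainable / 1e5) == 15  # ~1.5M SSCH/sec
```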

note that in addition to the enormous inefficiency introduced by the mainframe FICON layer (on top of the underlying fibre-channel standard), much of mainframe business processing also utilizes "CKD" disks ... which haven't been manufactured for decades ... current "CKD" disks are another emulation layer built on top of industry standard fixed-block disks (further reducing throughput compared to directly using the underlying native capacity).

reference to (single) fibre-channel for e5-2600 capable of over million IOPS (compared to z196 using 104 FICON channels to get 2M IOPS)
http://www.emulex.com/artifacts/0c1f55d0-aec6-4c37-bc42-7765d5d7a70e/elx_wp_all_hba_romley.pdf

fibre-channel from 1988 was designed to download the complete i/o request ... significantly reducing protocol chatter and latency. IBM added a FICON layer that serializes ibm mainframe channel program emulation over the underlying fibre-channel ... significantly increasing chatter and latency (and decreasing throughput). This discusses some recent enhancements for FICON ... somewhat recreating some of the original 1988 fibre-channel characteristics
http://www-03.ibm.com/systems/z/hardware/connectivity/ficon_performance.html
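the effect of the extra end-to-end operations can be seen with a toy latency model (all numbers here are invented for illustration, not measured FICON or fibre-channel figures): as link speed grows, transfer time shrinks, so the fixed per-round-trip latency comes to dominate a chatty protocol.

```python
# Toy model: elapsed time for one I/O = (round trips * link latency)
# + (bytes / link speed). A protocol that serializes a channel program
# as many end-to-end exchanges pays the latency term many times; one
# that ships the whole request in a single exchange pays it once.
def io_time_us(round_trips, rtt_us, nbytes, mbytes_per_sec):
    transfer_us = nbytes / mbytes_per_sec  # 1 MB/s moves 1 byte/us
    return round_trips * rtt_us + transfer_us

RTT_US = 25.0   # assumed end-to-end turnaround, microseconds
NBYTES = 4096   # a 4KB transfer

for mb_s in (100, 400, 1600):
    chatty = io_time_us(8, RTT_US, NBYTES, mb_s)  # many exchanges per I/O
    single = io_time_us(1, RTT_US, NBYTES, mb_s)  # whole request in one exchange
    print(f"{mb_s:5d} MB/s: chatty {chatty:7.1f}us vs single-exchange {single:6.1f}us")
```

at the higher link speeds the chatty protocol's elapsed time barely improves, since it is dominated by the fixed round-trip cost rather than the transfer.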

while (max) ZEC12 documentation lists it as having 50% more processing than (max) Z196 (75BIPS compared to 50BIPS), the documentation states (max) ZEC12 DBMS throughput is only 30% more than (max) Z196 (seemingly more of an I/O issue than a processor issue).

recent posts mentioning e5-2600 and/or FICON:
https://www.garlic.com/~lynn/2012.html#90 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2012d.html#28 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012d.html#43 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012d.html#50 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012d.html#64 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012e.html#3 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012e.html#4 Memory versus processor speed
https://www.garlic.com/~lynn/2012e.html#27 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012e.html#94 Can Mainframes Be Part Of Cloud Computing?
https://www.garlic.com/~lynn/2012e.html#99 Can Mainframes Be Part Of Cloud Computing?
https://www.garlic.com/~lynn/2012e.html#105 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012f.html#0 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012f.html#4 Can Mainframes Be Part Of Cloud Computing?
https://www.garlic.com/~lynn/2012f.html#7 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012f.html#94 Time to competency for new software language?
https://www.garlic.com/~lynn/2012g.html#36 Should IBM allow the use of Hercules as z system emulator?
https://www.garlic.com/~lynn/2012g.html#38 Should IBM allow the use of Hercules as z system emulator?
https://www.garlic.com/~lynn/2012h.html#4 Think You Know The Mainframe?
https://www.garlic.com/~lynn/2012h.html#20 Mainframes Warming Up to the Cloud
https://www.garlic.com/~lynn/2012h.html#35 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#52 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012h.html#62 What are your experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2012i.html#11 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012i.html#16 Think You Know The Mainframe?
https://www.garlic.com/~lynn/2012i.html#47 IBM, Lawrence Livermore aim to meld supercomputing, industries
https://www.garlic.com/~lynn/2012i.html#54 IBM, Lawrence Livermore aim to meld supercomputing, industries
https://www.garlic.com/~lynn/2012i.html#84 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012i.html#88 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012i.html#95 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#1 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#34 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#46 Word Length
https://www.garlic.com/~lynn/2012j.html#66 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012j.html#95 printer history Languages influenced by PL/1
https://www.garlic.com/~lynn/2012j.html#96 The older Hardware school
https://www.garlic.com/~lynn/2012k.html#41 Cloud Computing
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2012k.html#80 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012l.html#20 X86 server
https://www.garlic.com/~lynn/2012l.html#27 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012l.html#28 X86 server
https://www.garlic.com/~lynn/2012l.html#30 X86 server
https://www.garlic.com/~lynn/2012l.html#34 X86 server
https://www.garlic.com/~lynn/2012l.html#39 The IBM zEnterprise EC12 announcment
https://www.garlic.com/~lynn/2012l.html#42 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012l.html#51 Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#56 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#81 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#87 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#88 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#90 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#100 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#3 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#4 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#5 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#28 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012m.html#31 Still think the mainframe is going away soon: Think again. IBM mainframe computer sales are 4% of IBM's revenue; with software, services, and storage it's 25%
https://www.garlic.com/~lynn/2012m.html#43 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#67 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?
https://www.garlic.com/~lynn/2012n.html#9 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?
https://www.garlic.com/~lynn/2012n.html#13 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012n.html#14 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012n.html#19 How to get a tape's DSCB
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#46 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#48 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#50 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#51 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#56 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#69 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#11 Mainframes are still the best platform for high volume transaction processing

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler vs. COBOL--processing time, space needed

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Assembler vs. COBOL--processing time, space needed
Newsgroups: alt.folklore.computers
Date: Wed, 14 Nov 2012 09:54:19 -0500
Gerard Schildberger <gerard46@rrt.net> writes:
Yes, especially if the channels were outboard instead of inboard. Inboard channels stole "CPU" cycles for execution and memory access(es). _________________ Gerard Schildberger

re:
https://www.garlic.com/~lynn/2012o.html#19 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#20 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#21 Assembler vs. COBOL--processing time, space needed

low-end & mid-range 360 processors had integrated channels ... i.e. the same engine that executed the 360 instruction microcode also executed the channel microcode. 360/65(67) and above had external channels.

note that even external channels doing lots of i/o could cause lots of memory bus interference for the processor.

as an undergraduate I had done the cp67 changes to support tty/ascii terminals ... and tried to make the ibm 2702 terminal controller do something that it couldn't quite do. this somewhat prompted the univ to start a clone controller effort ... reverse engineer the channel interface and build a channel interface board for an interdata/3 programmed to emulate the 2702 (and also do what I had tried with the 2702). later four of us were written up as responsible for (some part of) the clone controller business ... some past posts
https://www.garlic.com/~lynn/submain.html#360pcm

an early bug was the controller "held" the channel ... which "held" the memory bus ... which resulted in the processor "red-light". The issue was that the 360/67 high-frequency timer "tic'ed" approx. every 13mics and needed to update the location 80 timer field. If the timer tic'ed while a previous timer memory update was still pending, the processor would stop with a hardware error. the clone controller had to frequently "free" the channel (which in turn "freed" the memory bus) ... to allow for the timer-tic update.

later at the science center ... the 360/67 was "upgraded" from a "simplex" to a dual-processor 360/67. the multiprocessor 360/67 was unique among 360s in that it added a "channel controller" and multi-ported memory, giving channels an independent path to memory. the multi-ported memory interface slowed the processor memory bus cycle by 10-15% ... a single-processor "half-duplex" 360/67 (single processor with channel controller) would run that much slower for compute intensive workload. However a simplex 360/67 running 100% cpu concurrent with heavy i/o workload would have lower thruput than the same workload on a half-duplex (single processor with channel controller) 360/67. misc. past posts mentioning multiprocessor (&/or compare&swap instruction)
https://www.garlic.com/~lynn/subtopic.html#smp

note that the clone controller business was a major motivation for the "failed" future system effort ... some past posts
https://www.garlic.com/~lynn/submain.html#futuresys

during "future system" period lots of 370 activity was killed off (and the lack of 370 products during this period is credited with giving clone processors a market foothold) ... but when FS was killed ... there was mad rush to get stuff back into the 370 product pipelines. this included the q&d 303x in parallel with 3081 using some warmed over FS technology.

370/158 had integrated channels; for the 303x ... there was an external "channel director" made by taking a 370/158 engine and eliminating the 370 instruction microcode (leaving just the integrated channel microcode). A 3031 was a 370/158 engine with just the 370 instruction microcode paired with a 2nd 370/158 engine with just the integrated channel microcode. A 3032 was a 370/168 reconfigured to use the 303x channel director. A 3033 started as 370/168 logic remapped to 20% faster chips ... but with something like ten times the circuits/chip. During 3033 development, some logic rework was done to take advantage of the extra circuits, eventually resulting in the 3033 being 50% faster than the 370/168.

The 360/370 channel architecture was half-duplex, end-to-end handshaking (between channel and end-device) and designed to be "shared" with lots of concurrent device activity. The end-to-end handshaking could result in enormous "channel busy" and latency ... especially with slow controllers.

I've repeated before that the common configuration for "large systems" (16 channels) tended to have both disk controllers and 3270 terminal controllers spread across the same channels. In a special project for STL, moving 300 people from the IMS group to an off-site location, I did support for HYPERchannel operating as channel extender. HYPERchannel remote device adapters were at the remote location, with the local "channel-attached" 3270 controllers moved to the remote location ... and HYPERchannel adapters used on the real channels. The HYPERchannel adapters had a much faster processor than the 3270 controllers ... so identical operations resulted in much lower channel busy (especially hand-shaking controller operations) ... and the move resulted in a 10-15% increase in workload throughput (lower channel busy interference with disk operations).

A similar but different case was the development of the 3880 disk controller (replacing the 3830) for 3mbyte/sec 3380 disks. While the transfer rate went up by a factor of ten, the 3880 processor engine was much slower than the 3830 it replaced (significantly increasing channel busy for control hand-shaking operations). The 3090 processor had originally configured its max number of channels based on the assumption that the 3880 would have the same channel busy for control operations as the 3830. When they found out how bad the 3880 controller actually was, they had to significantly increase the number of channels (in order to achieve target system throughput). This increased the number of TCMs needed to manufacture a 3090 ... a big hit to the 3090 manufacturing cost ... there was a joke that the 3090 group was going to bill the 3880 group for the increase in 3090 manufacturing cost. This in large part accounted for subsequent myths about the massive number of mainframe channels increasing i/o throughput (when it was actually to compensate for the implementation short-comings of half-duplex, end-to-end operation).
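a back-of-the-envelope way to see why a slower controller forces more channels (the IOPS and per-I/O busy figures below are invented for illustration, not 3830/3880 measurements): per-I/O channel-busy time times the target I/O rate gives channel-seconds consumed per second, and dividing by an acceptable per-channel utilization gives the channel count.

```python
import math

# Channel-count model: a controller that holds the channel longer per
# operation consumes more channel-seconds per second at the same I/O
# rate, so the same throughput target needs more physical channels.
def channels_needed(iops, busy_ms_per_io, max_util=0.7):
    busy_per_sec = iops * busy_ms_per_io / 1000.0  # channel-seconds/sec
    return math.ceil(busy_per_sec / max_util)

# invented figures: same 5000 IOPS target, faster vs slower controller
fast = channels_needed(iops=5000, busy_ms_per_io=1.0)
slow = channels_needed(iops=5000, busy_ms_per_io=2.5)
assert slow > fast  # the slower controller forces more channels
```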

The fibre-channel effort originated circa 1988 at LLNL, looking at standardizing a serial technology they were using in house (this was about the same time that LANL started work on HiPPI, a 100mbyte/sec standards version of the half-duplex cray channel). A big push in fibre-channel was pairs of serial links, one dedicated to transfer in each direction. Part of the effort was fully concurrent asynchronous operation to compensate for the increasing role that end-to-end latency played as transfer rates increased. Part of this was complete transfer of the request to the remote end as an asynchronous operation with results coming back ... minimizing the number of end-to-end operations for doing I/O (and the associated latency turn-around). reference to early jan1992 meeting doing cluster-scale-up for commercial using fibre-channel standard
https://www.garlic.com/~lynn/95.html#13
misc. past ha/cmp postings
https://www.garlic.com/~lynn/subtopic.html#hacmp

the above references also mention wanting to rework (serial) 9333/harrier to interoperate with fibre-channel ... instead it went its own unique way, coming out as SSA. It was part of working with LLNL on fibre-channel ... as well as working with LLNL on cluster-scale-up for numeric intensive & scientific workload ... some old email on cluster-scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

possibly only hrs after the last email above (end of jan1992), the cluster-scale-up work was transferred and we were told we couldn't work on anything with more than four processors. A couple weeks later (17Feb1992), it was announced as an IBM supercomputer for numeric intensive and scientific *ONLY*. old press reference:
https://www.garlic.com/~lynn/2001n.html#6000clusters1
later that spring (11May1992), there was press that clusters had caught the company "by surprise" ... even though I had been working on it for some time with various customers.
https://www.garlic.com/~lynn/2001n.html#6000clusters2

The later IBM mainframe FICON effort effectively layered the standard half-duplex characteristic on-top of fibre-channel standard ... carrying with it all the throughput limitations of the high number of end-to-end operations and associated latencies (drastically cutting the throughput of the FICON layer compared to that of the underlying fibre-channel).

--
virtualization experience starting Jan1968, online at home since Mar1970

Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 14 Nov 2012
Subject: Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design?
Blog: Enterprise Systems
re:
http://lnkd.in/CF8T3s
and
https://www.garlic.com/~lynn/2012m.html#10 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012m.html#12 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design

"Account fraud" form of identity theft is a major criminal motivation (be able to perform fraudulent financial transactions against existing account) ... a major source of information for account fraud had come from institution data breaches. Past statistics have insiders involved in at least 70% of such activity.

We were tangentially involved in the cal. data breach notification legislation. those driving the legislation had done detailed public privacy surveys and "account fraud" had come up as the #1 item ... primarily resulting from data breaches ... and there was little or nothing being done about it. The risk was primarily to the account holders, not the institutions with the breaches; typically the major motivation for security and criminal countermeasures is self-protection ... since the institutions had little at risk, there was little motivation to do anything. There was some hope that publicity from breach notification might motivate institutions to take corrective action.

In the more than a decade since the cal. data breach notification legislation ... there have been some industry security certification standards related to breaches ... the major certification criterion appears to be not having had a breach ... hence the jokes that nearly all breaches now are at certified institutions ... which have their certification revoked after a breach. The industry certification standards have also been used as part of justifying proposed federal legislation that would eliminate notifications.

More recently, "account fraud" has been in the news involving commercial accounts ... with no reg-e reimbursement regulation (like for personal/individual) ... the details are effectively the same ... except the sums are typically much larger (and fewer numbers of accounts).

We've periodically noted that the account information sufficient for fraudulent transactions is also needed in dozens of business processes at millions of locations around the planet ... and so even if the planet was buried under miles of information hiding encryption ... it still is unlikely to stop the leakage. I did co-author a financial industry standard that slightly tweaked the existing paradigm ... making that account information unusable by crooks for fraudulent transactions. It didn't do anything about preventing breaches, but it eliminated the motivation for crooks performing the majority of breaches. It does enormously decrease the "attack surface" (that needs to be defended from criminals) by several orders of magnitude ... freeing up security resources to be concentrated on drastically reduced number of vulnerabilities.

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler vs. COBOL--processing time, space needed

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Assembler vs. COBOL--processing time, space needed
Newsgroups: alt.folklore.computers
Date: Thu, 15 Nov 2012 11:04:14 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Yes, but the discussion was about comparing emulated 360s to real ones (I believe), so you'd want to try to keep the outboard parts the same as much as possible.

re:
https://www.garlic.com/~lynn/2012o.html#19 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#20 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#21 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#22 Assembler vs. COBOL--processing time, space needed

the problem is the outboard parts haven't been built for decades ... 360 disks would be 2311 & 2314 and terminals would be tty33, tty35, 2741, 1052 ... maybe a couple 2260 displays.

channels would mostly be integrated (i.e. the processor engine executing both the 360 instruction microcode and the integrated channel microcode) ... didn't get separate channels until the 360/65.

then you have 2311 & 2314 disk controllers ... 2702 terminal controllers, etc.

here is an old comparison of a 360/67 running cp67 with similar cms workload to vm370 running on a 3081.
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

I had been making such statements about the relative system throughput decline of disk technology since the latter half of the 70s. The disk division executives took exception to the above and assigned the division performance group to refute the comparison ... after a couple weeks they came back and essentially said I had slightly understated the situation. old posts with reference to the analysis
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

part of the discussion ... that continues today ... is that the processor rate has gotten substantially faster than other components ... today's delay for a cache miss (access to memory), when the delay is measured in number of processor cycles ... is comparable to 360 disk access delay (when disk access is measured in number of 360 processor cycles). this trend has resulted in various approaches allowing current processors to switch to other work while waiting for a cache miss ... comparable to the multitasking approach for 360s. Today it is hyperthreading (additional instruction streams simulating multiple processors) and out-of-order execution.

Also most processor caches are now (substantially) larger than total 360 real storage (and main memory is now larger than 360 disk storage)

I've recently noted that the last few generations of i86 are really risc cores with a hardware layer translating i86 instructions into risc micro-ops ... helping to moderate the throughput advantage that risc has had over i86 ... things like out-of-order execution that risc has had for decades.

big part of z196 processor throughput increase over z10 is supposedly the introduction of out-of-order execution ... with further out-of-order refinements done for zec12
2005 Z9 17.8BIPS 54processor 330MIPS/processor
2008 Z10 30BIPS 64processor 460MIPS/processor
2010 Z196 50BIPS 80processor 625MIPS/processor
2012 zec12 75BIPS 101processor 743MIPS/processor

as an aside ... the above referenced post from 1993 (regarding the 67/3081 comparison from the early 80s) was in the period when mainframe channel engineers were appearing at fibre-channel meetings, pushing the higher layer that would eventually become FICON (the FICON layer resulting in drastically reduced throughput compared to the underlying fibre-channel).

--
virtualization experience starting Jan1968, online at home since Mar1970

Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
Newsgroups: bit.listserv.ibm-main
Date: 15 Nov 2012 10:16:29 -0800
jwglists@GMAIL.COM (John Gilmore) writes:
Lynn's most recent response is unsatisfactory, in substance evasive.

Let us for the sake of the argument stipulate, though this is not usually the case, that some "non-mainframe server" can perform some single I/O operation faster than some mainframe.

It turns out that this stipulation does not much help Lynn's argument.

Mainframes handle aggregate I/O workloads, comprised of many single I/O operations, faster and with much less CP involvement than any "non-mainframe server". CPU involvement is much lower, and many I/O operations are handled concurrently. The channels, which for some reason Lynn seems to want to disparage, do most of the work.

Mark Post's point nevertheless remains crucial. Every case is indeed different. There are single applications that, particularly when they are considered in isolation, are easy enough to accomplish on a "non-mainframe" server. It is when many such applications are aggregated together that the mainframe comes into its own as an alternative, a highly attractive one, to server farms.


recent discussion in a.f.c. about the fibre-channel standard (work started 1988) ... in the 90s, some POK mainframe channel engineers started to participate ... working on layering mainframe channel conventions on top of fibre-channel ... which drastically cuts the throughput (compared to the underlying fibre-channel) ... and eventually turned into FICON.
https://www.garlic.com/~lynn/2012o.html#24 Assembler vs. COBOL--processing time, space needed

in the past couple years ... there has been some work on FICON with the introduction of TCW & zHPF to come a little closer to the underlying fibre-channel throughput (looks to give FICON about a factor of three times improvement).

recent posts mentioning TCW enhancement to FICON
https://www.garlic.com/~lynn/2012m.html#4 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#5 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#28 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012m.html#43 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012n.html#19 How to get a tape's DSCB
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#51 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing

IBM has z196 benchmark with peak of 2m IOPS with 104 FICON channels, 14 storage subsystems, and 14 system assist processors. It mentions that the 14 SAPs are capable of peak 2.2m SSCH/sec running at 100% cpu busy, but recommends SAPs run at 70% or less (i.e. 1.5m SSCH/sec).
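quick check of the quoted SAP arithmetic (numbers taken from the benchmark summary above):

```python
# Checking the z196 SAP figures quoted above: 14 SAPs peak at
# 2.2m SSCH/sec at 100% busy; IBM recommends running at 70% or less.
peak_ssch_per_sec = 2_200_000   # 14 SAPs at 100% cpu busy
recommended_util = 0.70         # recommended SAP utilization ceiling

usable = peak_ssch_per_sec * recommended_util
per_sap = peak_ssch_per_sec / 14

print(f"usable SSCH/sec at 70%: {usable:,.0f}")   # the ~1.5m quoted
print(f"peak SSCH/sec per SAP:  {per_sap:,.0f}")
```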

there is also a recent emulex announcement of a single fibre-channel for e5-2600 capable of over one million IOPS (compared to z196 peak of 2m IOPS using 104 FICON channels)

other aside, lots of past posts getting to play (IBM) disk engineer in bldgs. 14&15 ... and working on mainframe channel & disk thruput
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Why bankers rule the world

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 15 Nov 2012
Subject: Why bankers rule the world
Blog: Facebook
re:
https://www.facebook.com/lynn.wheeler/posts/230680310396062

Why bankers rule the world
http://www.atimes.com/atimes/Global_Economy/NK14Dj01.html

"interest" may be anything that is skimmed off by the financial industry. Traditional loans & mortgages moved from insured depository institutions making a profit off loan payments, to CDOs ... with the parties focused on enormous fees and commissions. There was $27T in triple-A rated toxic CDOs done during the economic mess ... with aggregate fees & commissions of around 15%-20%, that comes to at least $4T skimmed off. That would account for the claim that the financial industry tripled in size (as a percent of GDP) during the bubble (nearly all the parties involved no longer cared about loan quality or borrower qualifications ... focused on making loans as big as possible and as fast as possible).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

retail processed food has had an ever-decreasing share actually going to the food producers for some time. however, the enormous sucking sound of trillions disappearing into the financial industry during the last decade's economic mess, inflating it to three times its size at the start of the century, is newer ... and when the rest of the economy collapsed and the financial industry started hemorrhaging ... the gov. stepped in to keep them from returning to their earlier size (along with sleight of hand renaming performance bonuses to retention bonuses ... so they continue to get billions in bonuses for just showing up; for example, GS goes into the red in 2008, gets $10B gov. bailout and pays out $10B)

for the fun of it "In financial ecosystems, big banks trample economic habitats and spread fiscal disease":
http://www.sciencedaily.com/releases/2012/11/121114134658.htm
and "Bill Black: Naked Capitalism Still Rules":
http://www.nakedcapitalism.com/2012/11/bill-black-naked-capitalism-still-rules.htm

another item on big banks trampling economic habitats
http://www.phibetaiota.net/2012/11/berto-jongman-princeton-foucs-on-financial-ecosystems-finds-that-big-banks-trample-economic-habitats-and-spread-fiscal-disease/

recent posts mentioning $27T in triple-A toxic CDOs:
https://www.garlic.com/~lynn/2012.html#21 Zombie Banks
https://www.garlic.com/~lynn/2012.html#32 Wall Street Bonuses May Reach Lowest Level in 3 Years
https://www.garlic.com/~lynn/2012b.html#19 "Buffett Tax" and truth in numbers
https://www.garlic.com/~lynn/2012b.html#65 Why Wall Street Should Stop Whining
https://www.garlic.com/~lynn/2012b.html#95 Bank of America Fined $1 Billion for Mortgage Fraud
https://www.garlic.com/~lynn/2012c.html#30 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#31 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#32 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#36 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#37 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#38 The Death of MERS
https://www.garlic.com/~lynn/2012c.html#45 Fannie, Freddie Charge Taxpayers For Legal Bills
https://www.garlic.com/~lynn/2012c.html#46 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#54 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#55 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#32 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#42 China's J-20 Stealth Fighter Is Already Doing A Whole Lot More Than Anyone Expected
https://www.garlic.com/~lynn/2012e.html#23 Are mothers naturally better at OODA because they always have the Win in mind?
https://www.garlic.com/~lynn/2012e.html#40 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#42 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#58 Word Length
https://www.garlic.com/~lynn/2012f.html#31 Rome speaks to us. Their example can inspire us to avoid their fate
https://www.garlic.com/~lynn/2012f.html#63 One maths formula and the financial crash
https://www.garlic.com/~lynn/2012f.html#66 Predator GE: We Bring Bad Things to Life
https://www.garlic.com/~lynn/2012f.html#69 Freefall: America, Free Markets, and the Sinking of the World Economy
https://www.garlic.com/~lynn/2012f.html#75 Fed Report: Mortgage Mess NOT an Inside Job
https://www.garlic.com/~lynn/2012f.html#80 The Failure of Central Planning
https://www.garlic.com/~lynn/2012f.html#87 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012g.html#6 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#7 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#8 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#20 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012g.html#22 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012g.html#28 REPEAL OF GLASS-STEAGALL DID NOT CAUSE THE FINANCIAL CRISIS - WHAT DO YOU THINK?
https://www.garlic.com/~lynn/2012g.html#71 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#76 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#26 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#32 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#63 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'?
https://www.garlic.com/~lynn/2012h.html#75 Interesting News Article
https://www.garlic.com/~lynn/2012i.html#13 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#14 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#51 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'? thoughts please
https://www.garlic.com/~lynn/2012j.html#28 Why Asian companies struggle to manage global workers
https://www.garlic.com/~lynn/2012j.html#65 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012k.html#43 Core characteristics of resilience
https://www.garlic.com/~lynn/2012k.html#75 What's the bigger risk, retiring too soon, or too late?
https://www.garlic.com/~lynn/2012l.html#64 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2012m.html#56 General Mills computer
https://www.garlic.com/~lynn/2012n.html#6 General Mills computer
https://www.garlic.com/~lynn/2012n.html#12 Why Auditors Fail To Detect Frauds?
https://www.garlic.com/~lynn/2012o.html#7 Beyond the 10,000 Hour Rule

--
virtualization experience starting Jan1968, online at home since Mar1970

Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
Newsgroups: bit.listserv.ibm-main
Date: 16 Nov 2012 09:56:45 -0800
re:
https://www.garlic.com/~lynn/2012o.html#25 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

attached is an item from today that I did in the linkedin mainframe group ... work I had done for channel extender in 1980 that then also started to show up in fibre-channel in the late 80s. Then nearly 30yrs later (after the 1980 work), TCW does some similar stuff for FICON ... resulting in about three times improvement over original FICON ... and starting to come a little closer to the base fibre-channel throughput.

Part of the issue is that as bit transfer rate increases ... say from about 16mbits/sec to 16gbits/sec ... a factor of a thousand ... w/o a corresponding increase in block size (say from 4kbytes to 4mbytes) ... transfer time per block decreases by a factor of a thousand and end-to-end latency begins to play an increasingly significant role. original 1988 fibre-channel started to address this with totally asynchronous operation (between outgoing command packages and returning results).
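the arithmetic is easy to sketch ... link rates and block size are the illustrative figures from the paragraph above; the round-trip latency here is an assumed, illustrative value:

```python
# Why latency starts to dominate as link speed grows: per-block
# transfer time shrinks a thousandfold while end-to-end latency
# stays roughly constant.
def block_time_secs(block_bytes, link_bits_per_sec, rtt_secs):
    """Return (raw transfer time, transfer time + one round trip)."""
    transfer = (block_bytes * 8) / link_bits_per_sec
    return transfer, transfer + rtt_secs

rtt = 0.0001  # assumed 100 microsecond end-to-end round trip

# 4kbyte block at 16mbit/sec: transfer time swamps the latency
t_old, total_old = block_time_secs(4096, 16e6, rtt)
# same 4kbyte block at 16gbit/sec: latency is now most of the cost
t_new, total_new = block_time_secs(4096, 16e9, rtt)

print(f"16mbit/s: transfer {t_old*1e6:.0f}us, latency share {rtt/total_old:.0%}")
print(f"16gbit/s: transfer {t_new*1e6:.1f}us, latency share {rtt/total_new:.0%}")
```

with these assumptions the round trip is under 5% of the cost at 16mbit/sec but the overwhelming majority of it at 16gbit/sec ... hence the asynchronous-operation approach.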

the attached mentions both that a faster controller reduces channel busy and (implicitly) that downloading the channel program as a single operation to the remote end, as part of channel extender and latency compensation, reduces/eliminates the number of end-to-end operations & latency for commands and data transfer ... i.e. somewhat the TCW implementation for FICON.

a variation on the channel busy issue shows up with the extremely slow processor in the 3880 controller and 3090 channels. The 3880 channel busy time for disk operations was significantly larger than anticipated by the 3090 group ... to compensate, they had to significantly increase the number of channels ... which added a TCM and increased manufacturing costs (there were jokes about the 3090 group billing the 3880 group for the increased 3090 manufacturing cost ... so it came out of 3880 profit margin, not 3090 profit margin). The significant increase in 3090 channels (to compensate for the slow 3880 processor and high channel busy) contributed to the myth that all those channels were responsible for higher aggregate I/O throughput (they did contribute, but not in the sense that marketing was trying to imply).

from linkedin mainframe:

thornton & cray did the cdc6600 ... cray went on to found cray research and thornton went on to found network systems (along with a couple other cdc engineers) ... which produced HYPERchannel. In 1980, IBM's STL was bursting at the seams and they decided to move 300 people from the IMS group to an off-site building. The IMS group had tested remote 3270 support (back into the STL datacenter) but found it totally unacceptable (they were used to vm/cms local channel attached 3270s ... significantly better than mvs 3270 of any kind). I got dragged into doing HYPERchannel "channel-extender" support utilizing a collins digital radio microwave link. This turned out to provide 3270 response indistinguishable from what they were used to (and since the HYPERchannel A220 adapters had lower channel busy than 3270 controllers for identical operations, some of the STL datacenter 370/168s' performance improved). This somewhat resulted in my becoming the corporate HYPERchannel expert ... for both internal as well as customer installations. I got to go to Network Systems hdqtrs periodically and meet with their top people.

Not long afterwards, the FE IMS support group in Boulder was being moved to a new building, faced similar options, and also chose HYPERchannel channel extender. In that case, optical infrared modems were chosen, situated on the roofs of the two buildings. There was some concern about signal loss during winter snow storms ... but the only significant case was a small increase in bit-error rate during a "white-out" storm when employees weren't able to get into work.

later I did the IBM TCP/IP product enhancements for RFC1044 support. The standard product got about 44kbytes/sec throughput using nearly a full 3090 processor. In some tuning tests of the RFC1044 support at Cray Research, I got sustained channel-speed throughput between a 4341 and a Cray ... using only a modest amount of the 4341 processor (about a 500 times improvement in the bytes moved per instruction executed).

...

other past posts in this thread:
https://www.garlic.com/~lynn/2012l.html#56 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#57 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#59 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#70 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#81 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#87 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#88 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#90 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#100 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#3 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#4 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#5 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#6 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#7 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM mainframe evolves to serve the digital world

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 18 Nov 2012
Subject: IBM mainframe evolves to serve the digital world.
Blog: LinkedIn
in:
http://lnkd.in/JR_DNM
also
http://lnkd.in/RsuUyw
http://lnkd.in/NBbbzr

reference:
https://www.garlic.com/~lynn/2012o.html#25 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#27 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

When I moved to San Jose Research (on the main plant site), they let me wander around the disk plant site. Bldg. 14 (dasd engineering lab) had a lot of mainframes for dasd testing. They were pre-scheduled 7x24, around the clock, for stand-alone testing. They had once tried MVS for anytime, on-demand concurrent testing ... but found that MVS (in that environment) had a 15min MTBF (requiring re-ipl) ... even with only a single testcell. I offered to rewrite the I/O supervisor to make it bullet proof and never fail ... supporting anytime, on-demand, concurrent testing, greatly improving productivity. Later I did an internal report describing the work and mentioning the MVS 15min MTBF, bringing down the wrath of the MVS group on my head, which continued for the rest of my career at the company (they would have had me fired if they could have figured out how).

One monday morning, I got a call claiming that I had changed my software and totally degraded their 3033 system. Total testcell testing typically used only 2-3% of cpu ... so the engineers had found a 3830 controller and 16 3330 drives and were running their own internal online service on the machine. The response and throughput that morning had degraded several hundred percent. I spent some time diagnosing the slowdown and it eventually turned out it wasn't my software ... some engineer had swapped in an engineering 3880 for the 3830 ... and the enormously slower (jib-prime) processor in the 3880 was resulting in all sorts of system overhead and slowdown. This was still six months before first customer ship ... so it gave some time to come up with workarounds to somewhat mitigate part of the problems. These types of things would periodically happen ... getting me dragged into doing disk engineering and design ... they even started insisting that I sit in on conference calls with POK channel engineers. misc. past posts about getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

original 3880 controller work was late 70s ... so there were some number of hardware fixes that got done over the next decade. However, starting with the original hardware, one of the early issues was that the processor was so slow ... one of the mitigation efforts was to preload stuff into the controller for a particular channel interface. If an I/O request came in from a different channel interface ... it took milliseconds extra to dump & reload for the changed channel interface. I had done a superfast redrive I/O supervisor for when the same processor had multiple channel paths to the same controller/disks ... effectively load balancing instead of alternate path (which tended to always try to drive on the same channel interface). Because of the extra milliseconds for the 3880 to switch channel interfaces ... the load balancing implementation ran significantly worse than alternate path.
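a toy model of the effect ... all timings here are assumed, illustrative values, not measured 3880 figures:

```python
# Why round-robin "load balancing" across two channel interfaces lost
# to a fixed primary-then-alternate path on the 3880: every interface
# switch cost milliseconds of controller dump/reload time.
# Timings below are illustrative assumptions, not measured figures.

SWITCH_PENALTY = 0.004   # assumed 4ms to dump & reload controller state
SERVICE_TIME   = 0.001   # assumed 1ms controller service per I/O

def total_time(interface_sequence):
    """Total elapsed time to run I/Os in order over the given interfaces."""
    t, last = 0.0, None
    for iface in interface_sequence:
        if last is not None and iface != last:
            t += SWITCH_PENALTY   # interface changed: pay the reload cost
        t += SERVICE_TIME
        last = iface
    return t

n = 1000
round_robin  = [i % 2 for i in range(n)]   # alternate interfaces every request
primary_path = [0] * n                     # stay on one interface

print(f"round-robin:  {total_time(round_robin):.3f}s")
print(f"primary path: {total_time(primary_path):.3f}s")
```

with these numbers the round-robin run is several times slower, despite "balancing" the load ... which is the behavior described above.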

at some point, somebody dropped an early multi-pathing architecture document across my desk for review. I realized I could drastically improve multi-pathing efficiency and hardware requirements, and wrote up an alternative architecture that only slightly changed the software interface. The response I got back was that it was too late to change the architecture (it didn't appear that the original architecture had involved anybody from the engineering group).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframes are still the best platform for high volume transaction processing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 18 Nov 2012
Subject: Mainframes are still the best platform for high volume transaction processing
Blog: Mainframe Experts
re:
http://lnkd.in/RsuUyw
and
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#11 Mainframes are still the best platform for high volume transaction processing

TPC (transaction processing) benchmark leader right now is a large oracle configuration. IBM's response in the press has been purescale ... old post mentioning purescale/exadata "From the Annals of Release No Software Before Its Time"
https://www.garlic.com/~lynn/2009p.html#43

IBM TPC benchmarks have been intel & power ... see upthread reference.

For mainframe, IBM will mention the percent difference between different mainframe hardware ... aka max zEC12 will do 30% more DBMS than max z196 (even tho it lists max zEC12 processing as 50% more than max z196, in part because of going from 80 processors to 101 processors) ... but seems to avoid the absolute industry standard benchmarks (of the kind it publishes for its intel & power platforms) that would allow comparison with other platforms.

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Regarding Time Sharing
Newsgroups: bit.listserv.ibm-main
Date: 18 Nov 2012 09:39:21 -0800
mike@MENTOR-SERVICES.COM (Mike Myers) writes:
I said that once the think time expires, the TSO user's address space is swapped out (physically), but only if there is a need to use its main storage pages to satisfy the needs of other address spaces. Until that time expires and the storage need arises, the address space remains logically swapped out (as long as they remain "not-ready"). Becoming ready while logically swapped causes them to transition from logically swapped out to swapped in.

In today's systems with large main storage, an address space may remain logically swapped out for an indefinite period of time. In today's world, think time (a setting found in the IEAOPTxx member of the system parameter library - xxx.PARMLIB) is still used to determine when a logically swapped out address space becomes a "candidate" for a physical swap out. Again, that only happens if there is a need to take those pages away for someone else.


MFT & MVT conventions were very pointer-passing API oriented. The transition from MVT to OS/VS2 release 1 ... SVS ... was fairly straight-forward ... since it was a single virtual address space ... all pointers were still in the same address space.

The biggest issue was that channel programs were built by library code in the application space and passed to the supervisor via EXCP/SVC0 ... these were now being built with virtual addresses, while the channel requires channel programs to have real addresses. The initial implementation involved borrowing CCWTRANS from (virtual machine) CP67 (precursor to vm370, running on the 360/67).
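a rough sketch of what a CCWTRANS-style translation has to do ... this is a toy model with a hypothetical page table, not the actual CP67 code (real channel programs also involve page pinning, TICs, and command-code handling for chained CCWs that this ignores):

```python
# Toy model of CCW translation: copy a channel program CCW, rewriting
# its virtual data address as a real address, and splitting any data
# transfer that crosses a page boundary into data-chained pieces
# (each piece's real address can land in a different real frame).

PAGE = 4096
CD = 0x80  # chain-data flag in the CCW flag byte

def translate_ccw(op, vaddr, count, flags, page_table):
    """Return a list of (op, real_addr, count, flags) CCW pieces."""
    out = []
    while count > 0:
        in_page = min(count, PAGE - (vaddr % PAGE))   # bytes left in this page
        real = page_table[vaddr // PAGE] * PAGE + (vaddr % PAGE)
        more = count > in_page
        # intermediate pieces get the chain-data flag so the channel
        # treats the sequence as one logical transfer
        out.append((op, real, in_page, (flags | CD) if more else flags))
        vaddr += in_page
        count -= in_page
    return out

# hypothetical page table: virtual page number -> real frame number
pt = {0: 7, 1: 3}
# a 6000-byte read starting at virtual 0x800 crosses into page 1,
# so it splits into two data-chained CCWs with different real frames
for piece in translate_ccw(0x06, 0x800, 6000, 0x00, pt):
    print(piece)
```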

I had rewritten cp67 virtual memory operation as an undergraduate in the 60s ... and was asked to visit and talk about what was being done for SVS virtual memory operations. I pointed out several things that they were doing that were flat-out wrong ... but they insisted on going ahead anyway.

In the transition from SVS to MVS ... each application was given its own virtual address space ... but an image of the MVS kernel was included (taking 8mbytes of the 16mbyte virtual address space), in large part because of the pointer-passing APIs ... the called routine/service needing to access the parameter list pointed to by the pointer. However, numerous subsystem services that were outside the kernel ... now also had their own virtual address spaces ... and also relied on pointer-passing APIs from applications (now in different address spaces). In order to support subsystems accessing the application parameters, the common segment was created ... initially a 1mbyte area that resided in every address space ... in which applications would stuff parameters, making it possible to exchange them with subsystems in a different address space. As systems grew, requirements for common segment space outgrew one megabyte ... becoming the CSA (common system area). Late in the 370 period, large customer installations would have 4-5mbytes of CSA threatening to increase to 5-6mbytes ... restricting applications to only 2mbytes (out of 16mbytes). To partially mitigate some of this ... some of the 370-xa architecture was retrofitted to the 3033 as "dual-address space" ... allowing a called subsystem (in a separate address space) to access the calling application's address space.

Early 1980s, somebody who got an award for correcting several virtual memory problems in MVS (still there from the original SVS) contacted me about retrofitting the fixes to vm370. I commented that I had not done them the wrong way in the first place as an undergraduate in the 60s (rewriting cp67) ... so he was out of luck getting an award for fixing vm370 also.

About this time, corporate had designated vm370/cms as the official strategic interactive computing solution (in part because of the huge explosion in the number of vm/4300 systems) ... causing concern for the TSO product administrator. He contacted me about possibly helping by porting my vm370 dynamic adaptive resource manager (also originally done as an undergraduate in the 60s for cp67) to MVS. I pointed out that MVS had numerous significant structural problems affecting TSO response & throughput ... which just putting in my resource manager wouldn't help. A big issue was multi-track search ... especially for PDS directory lookup ... which could lock out a channel, controller, & drive for 1/3rd of a second at a time.

I had been called into a number of internal and customer MVS installation accounts where the performance problem turned out to be PDS directory multi-track search. A trivial example was an internal account with both MVS and VM370 machines with "shared" dasd controllers ... but with an operational guideline that MVS packs were never mounted on vm370 controller strings. One day it accidentally happened and within five minutes vm370 users were making irate calls to the datacenter ... the MVS multi-track search channel programs were locking out VM370/CMS users, severely degrading their performance. MVS operations were asked to move the pack but refused. So the VM370 group put up an enormously optimized virtual VS1 system with its pack on an MVS string ... which brought the MVS system to its knees and significantly reduced the MVS impact on CMS I/O activity. (MVS operations agreed to move the MVS pack off the vm370 string as long as the vm370 group never again put the VS1 pack on their MVS string.)

misc. old email mentioning TSO product administrator asking me to port my VM370 dynamic adaptive resource manager to MVS
https://www.garlic.com/~lynn/2006b.html#email800310
https://www.garlic.com/~lynn/2006v.html#email800310b

As an aside, the TSO product administrator obviously didn't know that, at the time, the MVS group appeared to be trying to get me fired. I had wandered into the disk engineering lab and noted that they were doing stand-alone, dedicated, 7x24, around-the-clock scheduled test time on their mainframes. They had previously tried to use MVS for anytime, on-demand, concurrent testing ... but in that environment MVS had a 15min MTBF (requiring re-ipl). I offered to rewrite the I/O supervisor to make it bullet-proof and never fail ... supporting concurrent, anytime, on-demand testing ... greatly improving their throughput. I then wrote up an internal report on the work and happened to mention the MVS 15min MTBF, bringing down the wrath of the MVS group on my head (which would periodically seem to reappear during the rest of my career). misc. past posts mentioning getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Regarding Time Sharing
Newsgroups: bit.listserv.ibm-main
Date: 18 Nov 2012 10:29:40 -0800
re:
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing

oh, old presentation at fall atlantic share meeting in 1968
https://www.garlic.com/~lynn/94.html#18

on MFT14 & CP67. CP67 had been installed at the univ the last week of Jan68. The univ. continued to run OS/360 (in 360/65 mode on the 360/67) for production during the period. I was the support person for os/360 ... but did get to play with cp67 some on the weekends.

most of the presentation was about the cp67 pathlength rewrite during spring & summer 1968 (before I really got into totally rewriting virtual memory and dynamic adaptive resource management). it does reference that I had completely redone os/360 sysgen ... hand reordering much of the stuff to achieve optimal disk operation (and, while I was at it, changing it so it could be run in the production jobstream)

standard sysgen out of the box took over 30 seconds elapsed time (with hasp; before hasp, it was well over a minute) for the typical univ. student job workload (3-step fortgclg, before watfor). This was mostly the job scheduler & allocation doing approx. 10 seconds of loading linklib and svclib members per job step. Careful sysgen reordering got it down to 12.9 seconds elapsed time (not quite three times faster).

the re-order accomplished two things: 1) it grouped the highest-used linklib & svclib members together at the start of the PDS dataset ... minimizing arm seek between PDS directory lookup and loading the member, and 2) placed the highest-used members at the front of the PDS directory. A full-cylinder 2314 PDS directory search took 20 revolutions at 2000rev/min ... well over half a second elapsed time, during which the channel, controller, and disk were locked out. Even a half-cylinder PDS directory multi-track search would take 1/3rd of a second (per member load). Making sure that the highest-used members were on the first track or two of the PDS directory could reduce PDS directory member lookup to 40-50 milliseconds. 2314 reference from bitsavers
http://www.bitsavers.org/pdf/ibm/dasd/A26-3599-4_2314_Sep69.pdf
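those timings follow directly from the 2314 geometry quoted above (20-track cylinders, 2000 rev/min) ... quick check:

```python
# Checking the 2314 PDS-directory search timings quoted above.
RPM = 2000
rev_secs = 60 / RPM                 # one revolution: 30ms

full_cyl = 20 * rev_secs            # multi-track search reads all 20 tracks
half_cyl = 10 * rev_secs            # half-cylinder directory search
front_tracks = 1.5 * rev_secs       # hit on the first track or two

print(f"revolution:          {rev_secs*1000:.0f}ms")
print(f"full-cyl search:     {full_cyl*1000:.0f}ms")      # well over half a second
print(f"half-cyl search:     {half_cyl*1000:.0f}ms")      # ~1/3rd second
print(f"front-of-directory:  {front_tracks*1000:.0f}ms")  # the 40-50ms range
```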

past posts on dynamic adaptive resource management
https://www.garlic.com/~lynn/subtopic.html#fairshare
past posts on virtual memory management
https://www.garlic.com/~lynn/subtopic.html#wsclock
past posts on getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
past posts on ckd dasd, multi-track search, fba (including being told that even if I gave MVS fully integrated & tested FBA support, I still needed an additional $26M in new business profit)
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Does the IBM System z Mainframe rely on Obscurity or is it Security by Design?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 18 Nov 2012
Subject: Does the IBM System z Mainframe rely on Obscurity or is it Security by Design?
Blog: Enterprise Systems
re:
http://lnkd.in/CF8T3s
and
https://www.garlic.com/~lynn/2012m.html#10 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012m.html#12 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012o.html#23 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design

... sorry, finger slip ... typed both guestner &/or guerstner several times ... i.e. ... also archived here:
https://www.garlic.com/~lynn/2012m.html#24

references to this earlier post in the (linkedin) IBMers' "How do you feel about the fact that today India has more IBM employees than US?" discussion
https://www.garlic.com/~lynn/2012g.html#82

reference (which has a big part on Gerstner's resurrection of IBM)
https://www.amazon.com/Strategic-Intuition-Creative-Achievement-Publishing-ebook/dp/B0097D773O/
ibm system mag profile on me (although they muddled some details)
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/
references Gerstner in competition for next CEO of AMEX ... and Gerstner "wins"
http://www.counterpunch.org/2012/05/15/why-jamie-dimons-2-billion-gambling-loss-will-not-speed-financial-reform/
AMEX Shearson
https://en.wikipedia.org/wiki/Shearson
was in competition with KKR
https://en.wikipedia.org/wiki/Kohlberg_Kravis_Roberts
to acquire RJR
https://en.wikipedia.org/wiki/RJR_Nabisco
KKR then hires away Gerstner to run RJR ... before the IBM board hires him away to resurrect IBM (note linkedin rewrites URLs and can drop the period after the Jr)
https://en.wikipedia.org/wiki/Louis_V._Gerstner,_Jr.
Gerstner then goes on to be chairman of another large private equity company
https://en.wikipedia.org/wiki/Carlyle_Group

Note that slightly before IBM board hires Gerstner to resurrect IBM ... AMEX spins off large transaction processing business as "First Data" (claims largest IPO up until that time; also some really large mainframe datacenters ... one had 40+ mainframe CECs each at $30M+ ... constantly being upgraded, none older than 18 months) ... the IBM system mag profile has me there as chief scientist the first part of the century. KKR then does reverse IPO of First Data (claims to have been the largest reverse IPO up until that time ... after having been the largest IPO 15yrs earlier)
https://en.wikipedia.org/wiki/First_Data

gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

--
virtualization experience starting Jan1968, online at home since Mar1970

360/20, was 1132 printer history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Sun, 18 Nov 2012 22:43:28 -0500
hancock4 writes:
That's correct--the 1401 was a separate machine and not connected to the 709x. But the IBM history says one of the original intentions for the 1401 was to serve the big 70xx machines in that way in addition to being a standalone computer for smaller organizations.

As someone mentioned, the 1401's printer and card reader were much faster than the default units (407) on the 709x.

The IBM history also says that the 1401 could be used as a front-end editor for Fortran programs or data files heading to the larger machine. It could also do the formatting of a raw output file to be printed.

Today, some people write a COBOL program to extract information from various sources for a report, but the COBOL program only prepares a flat file of output records. This file is sent to a 4GL (like Easytrieve Plus or perhaps to a PC and Excel) which formats the actual report.


univ. had 709 and 1401 ... student jobs ran on 709 tape-to-tape fortran. 1401 was used for card->tape; tapes were then moved from 1401 to 709 ... then tape moved back from 709 to 1401 and tape->printer/punch. student jobs ran in a second or two.

univ. got a 360/30 replacement for 1401 as part of transition from 709/1401 to 360/67 supposedly running tss/360. i got student job doing port of the 1401 MPIO program (tape<->unit-record) to 360/30 (possibly was part of learning 360 ... since the 1401 MPIO could be run directly on 360/30 in 1401 hardware emulation).

that eventually evolves into becoming part-time undergraduate and fulltime datacenter employee responsible for os/360 (tss/360 never made it to any real production use ... so when 360/67 arrived, it ran os/360 in 360/65 mode).

three people from science center came out last week jan1968 and installed (virtual machine) cp67 at the univ. the univ. continued to run os/360 production ... but i could play with cp67 some on weekend ... along with doing os/360 work. old post with part of presentation made fall 68 SHARE (ibm user group meeting) in atlantic city.
https://www.garlic.com/~lynn/94.html#18

recent post discussing more of that in detail (in ibm-main mailing list)
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
previous post in that thread:
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing

along the way, I also did tty/ascii terminal support for cp67 ... and then implemented (both 2741&tty/ascii) terminal support and editor (redoing cms edit syntax from scratch) in HASP at MVT18 ... removing the 2780 (remote printer/reader/punch) support code as part of reducing the HASP real storage footprint.

misc. past posts mentioning atlantic city presentation:
https://www.garlic.com/~lynn/99.html#95 Early interupts on mainframes
https://www.garlic.com/~lynn/99.html#131 early hardware
https://www.garlic.com/~lynn/99.html#175 amusing source code comments (was Re: Testing job applicants)
https://www.garlic.com/~lynn/2000.html#55 OS/360 JCL: The DD statement and DCBs
https://www.garlic.com/~lynn/2000.html#76 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2000c.html#20 IBM 1460
https://www.garlic.com/~lynn/2000d.html#44 Charging for time-share CPU time
https://www.garlic.com/~lynn/2000d.html#48 Navy orders supercomputer
https://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
https://www.garlic.com/~lynn/2000d.html#51 Navy orders supercomputer
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#52 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001.html#53 Disk drive behavior
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001f.html#2 Mysterious Prefixes
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
https://www.garlic.com/~lynn/2001h.html#60 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001k.html#20 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writing BAL 370.
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#50 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002f.html#53 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002h.html#62 history of CMS
https://www.garlic.com/~lynn/2002i.html#42 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#62 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#57 Easiest possible PASV experiment
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004b.html#17 Seriously long term storage
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004b.html#47 new to mainframe asm
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#10 IBM 360 memory
https://www.garlic.com/~lynn/2004f.html#6 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#39 spool
https://www.garlic.com/~lynn/2004h.html#43 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004k.html#41 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004n.html#23 Shipwrecks
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2005b.html#41 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005h.html#6 Software for IBM 360/30 (was Re: DOS/360: Forty years)
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005k.html#8 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005k.html#14 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005k.html#50 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005l.html#6 SHARE 50 years?
https://www.garlic.com/~lynn/2005m.html#16 CPU time and system load
https://www.garlic.com/~lynn/2005n.html#31 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#40 You might be a mainframer if... :-) V3.8
https://www.garlic.com/~lynn/2005o.html#12 30 Years and still counting
https://www.garlic.com/~lynn/2005o.html#14 dbdebunk 'Quote of Week' comment
https://www.garlic.com/~lynn/2005o.html#35 Implementing schedulers in processor????
https://www.garlic.com/~lynn/2005p.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#34 How To Abandon Microsoft
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#38 IEH/IEB/... names?
https://www.garlic.com/~lynn/2005s.html#50 Various kinds of System reloads
https://www.garlic.com/~lynn/2005t.html#8 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2005t.html#18 Various kinds of System reloads
https://www.garlic.com/~lynn/2006.html#2 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006.html#7 EREP , sense ... manual
https://www.garlic.com/~lynn/2006.html#15 S/360
https://www.garlic.com/~lynn/2006.html#40 All Good Things
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006h.html#57 PDS Directory Question
https://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
https://www.garlic.com/~lynn/2006m.html#29 Mainframe Limericks
https://www.garlic.com/~lynn/2006o.html#38 hardware virtualization slower than software?
https://www.garlic.com/~lynn/2006q.html#20 virtual memory
https://www.garlic.com/~lynn/2006v.html#0 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006w.html#22 Are hypervisors the new foundation for system software?
https://www.garlic.com/~lynn/2006x.html#10 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#17 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007b.html#45 Is anyone still running
https://www.garlic.com/~lynn/2007c.html#45 SVCs
https://www.garlic.com/~lynn/2007e.html#51 FBA rant
https://www.garlic.com/~lynn/2007n.html#93 How old are you?
https://www.garlic.com/~lynn/2007o.html#69 ServerPac Installs and dataset allocations
https://www.garlic.com/~lynn/2007p.html#0 The use of "script" for program
https://www.garlic.com/~lynn/2007p.html#24 what does xp do when system is copying
https://www.garlic.com/~lynn/2007p.html#72 A question for the Wheelers - Diagnose instruction
https://www.garlic.com/~lynn/2007r.html#0 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007s.html#33 Age of IBM VM
https://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
https://www.garlic.com/~lynn/2007v.html#68 It keeps getting uglier
https://www.garlic.com/~lynn/2008.html#33 JCL parms
https://www.garlic.com/~lynn/2008c.html#10 Usefulness of bidirectional read/write?
https://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
https://www.garlic.com/~lynn/2008g.html#9 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2008h.html#70 New test attempt
https://www.garlic.com/~lynn/2008n.html#50 The Digital Dark Age or.....Will Google live for ever?
https://www.garlic.com/~lynn/2008o.html#53 Old XDS Sigma stuff
https://www.garlic.com/~lynn/2008r.html#21 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2008s.html#54 Computer History Museum
https://www.garlic.com/~lynn/2009b.html#71 IBM tried to kill VM?
https://www.garlic.com/~lynn/2009e.html#67 Architectural Diversity
https://www.garlic.com/~lynn/2009h.html#47 Book on Poughkeepsie
https://www.garlic.com/~lynn/2009h.html#72 Operating Systems for Virtual Machines
https://www.garlic.com/~lynn/2009j.html#76 CMS IPL (& other misc)
https://www.garlic.com/~lynn/2009l.html#46 SNA: conflicting opinions
https://www.garlic.com/~lynn/2009m.html#38 33 Years In IT/Security/Audit
https://www.garlic.com/~lynn/2009m.html#71 Definition of a computer?
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs
https://www.garlic.com/~lynn/2009o.html#77 Is it time to stop research in Computer Architecture ?
https://www.garlic.com/~lynn/2009q.html#73 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2010d.html#60 LPARs: More or Less?
https://www.garlic.com/~lynn/2010g.html#68 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2010h.html#18 How many mainframes are there?
https://www.garlic.com/~lynn/2010j.html#37 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#13 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010l.html#33 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010l.html#61 Mainframe Slang terms
https://www.garlic.com/~lynn/2010n.html#66 PL/1 as first language
https://www.garlic.com/~lynn/2011.html#47 CKD DASD
https://www.garlic.com/~lynn/2011b.html#37 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2011b.html#81 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011f.html#71 how to get a command result without writing it to a file
https://www.garlic.com/~lynn/2011g.html#11 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#50 My first mainframe experience
https://www.garlic.com/~lynn/2011h.html#17 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011h.html#18 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011j.html#13 program coding pads
https://www.garlic.com/~lynn/2011o.html#34 Data Areas?
https://www.garlic.com/~lynn/2011o.html#87 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011p.html#5 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011p.html#15 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2012.html#36 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012.html#96 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2012.html#100 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2012b.html#6 Cloud apps placed well in the economic cycle
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2012e.html#98 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012g.html#1 Did the 1401 use SVC's??
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012h.html#66 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012i.html#9 Familiar
https://www.garlic.com/~lynn/2012l.html#26 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Regarding Time Sharing
Newsgroups: bit.listserv.ibm-main
Date: 18 Nov 2012 20:31:44 -0800
shmuel+gen@PATRIOT.NET (Shmuel Metz, Seymour J.) writes:
IBM issued several releases of TSS/360 and even had a PRPQ for TSS/370 before they canceled it. TSS performance was considerably better by then.

re:
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#32 Regarding Time Sharing

1968 ... i did a simulated 35 user cp67/cms benchmark for fortran program edit, compile, link and execute ... that had better response and throughput than a simulated four user tss/360 benchmark for the same fortran program edit, compile, link and execute ... on the same exact 360/67 hardware.

at the time of the test, cp67 had many of the pathlength enhancements mentioned in previous posts ... but not the rewrite of the virtual memory management and my dynamic adaptive resource manager.

tss/370 had a number of special bids; one was AT&T, to do a stripped down TSS/370 kernel (SSUP) that had unix layered on top. Amdahl was marketing UTS ... a direct unix port ... mostly running in a vm370 virtual machine.

Eventually IBM also did a port of UCLA LOCUS (unix work-alike) ... also running in a vm370 virtual machine ... announced as aix/370 (coupled with aix/386 ... LOCUS supported distributed filesystem as well as distributed processing ... even migrating processes between 386 and 370 with different binaries).

The issue in the period was that field maintenance required EREP and error recovery. At the time, adding mainframe EREP and error recovery to a unix kernel was a several times larger effort than the straightforward unix port to the mainframe.

one of the early jokes about tss/360 ("official" timesharing) was that it had approx. 1200 people working on it at a time when the science center had approx 12 people working on cp67/cms. by the mid-70s, after the tss/360 decommit and before pok convinced corporate to kill-off vm370 and transfer all of the people to POK to work on mvs/xa (mentioned before that endicott managed to save the product mission, but had to reconstitute a group from scratch) ... the tss/370 group had 20 people and the vm/370 group (follow-on to cp67) had maybe 300 people ... aka the performance of a system is inversely proportional to the number of people working on it.

spring 1985, i was involved in comparison study of tss and vm/sp kernels; part of that analysis (vm/sp had gotten grossly bloated by that time):
                          TSS     VM/SP
modules                   109       260
lines of assembler code   51k      232k

misc. past posts mentioning tss/vmsp comparison study:
https://www.garlic.com/~lynn/2001m.html#53 TSS/360
https://www.garlic.com/~lynn/2006e.html#33 MCTS
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

misc. old email mentioning tss/unix
https://www.garlic.com/~lynn/2006b.html#email800310
https://www.garlic.com/~lynn/2006t.html#email800327
https://www.garlic.com/~lynn/2007e.html#email800404
https://www.garlic.com/~lynn/2006f.html#email800404
https://www.garlic.com/~lynn/2007b.html#email800408
https://www.garlic.com/~lynn/2006e.html#email840109

note: native tss/360 had a paged-mapped filesystem ... pagetables could be set up to point directly at the executable image on disk ... which could be paged directly in and executed ... even relocatable code that could concurrently reside in different virtual address spaces at different virtual addresses. the tss/360 relocatable executable format was quite a bit different than the os/360 relocatable executable format. In the os/360 relocatable executable format, the file had to be preloaded into memory and all address constants swizzled to absolute values (representing their loaded location). tss/360 executable address constants were truly relocatable ... i.e. the address constants in the executable image didn't require any fiddling ... the executable image could be mapped to any virtual address and everything worked.
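
a toy sketch of that adcon difference (hypothetical code, not the actual os/360 or tss/360 executable formats or loader logic ... just the idea):

```python
# Toy illustration: load-time "swizzled" address constants vs truly
# relocatable ones resolved from a base address at reference time.
# All names here are made up for illustration.

def os360_style_load(image, adcon_offsets, load_addr):
    # OS/360 style: the loader patches each adcon to an absolute address,
    # so the in-memory copy differs per load address and the disk image
    # can't simply be mapped read-only into several address spaces.
    loaded = list(image)
    for off in adcon_offsets:
        loaded[off] = image[off] + load_addr
    return loaded

def tss360_style_ref(image, off, base_addr):
    # TSS/360 style: adcons stay module-relative; the absolute address is
    # formed at reference time, so one unmodified page image can be mapped
    # at different virtual addresses in different address spaces.
    return image[off] + base_addr

image = [0, 100, 0, 200]   # toy "executable"; words 1 and 3 hold adcons
adcons = [1, 3]

a = os360_style_load(image, adcons, 0x20000)
b = os360_style_load(image, adcons, 0x50000)
assert a != b              # swizzled copies diverge per load address

# the same untouched image works at both mapping addresses
assert tss360_style_ref(image, 1, 0x20000) == 0x20000 + 100
assert tss360_style_ref(image, 1, 0x50000) == 0x50000 + 100
```

the point being that a swizzled image is tied to one load address, while a relative-adcon image can be shared, mapped, and paged directly from disk.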

the os/360 convention caused enormous problems for me when i did the paged-mapped filesystem for cp67/cms. cms adopted lots of applications, compilers, loaders, etc. from os/360 ... which required that relocatable address constants be swizzled to absolute addresses after loading and before execution. for truly high-performance operation ... I had to do a lot of application fiddling to make the relocatable address constants more like tss/360. I had also seen lots of (other) things that tss/360 had done wrong in its page mapped implementation ... which i avoided in my cp67/cms implementation. Then Future System effectively did a similar rote page mapped filesystem design with all the other tss/360 performance shortcomings. It was one of the things that I would periodically ridicule the FS group about ... believing what I already had running was better than what they were doing high-level specs on. ... misc. past posts mentioning having done cp67/cms paged-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
misc. past posts mentioning horrors having to deal with os/360 relocable address constants
https://www.garlic.com/~lynn/submain.html#adcon

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Regarding Time Sharing
Newsgroups: bit.listserv.ibm-main
Date: 19 Nov 2012 07:15:24 -0800
mike@MENTOR-SERVICES.COM (Mike Myers) writes:
As Lynn Wheeler points out, TSS/360 was considered sound by many both in IBM and by at least a handful of IBM customers. I ran across many strong advocates during an assignment at IBM's Watson Research Center at Yorktown Heights, NY in the early '80s. These folks were serious enough to make its control program design a basis for a competing version of VM/XA, known internally as VM/XB. This was my last project with IBM. I left the company before VM/XB was eventually shelved, and the issue of the competitive design may well have been settled for economic, rather than technical, reasons.

re:
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing

I had sponsored an internal advanced technology conference spring of 1982 ... discussed here:
https://www.garlic.com/~lynn/96.html#4a

As a consequence of not being able to fix TSO &/or MVS, POK had a project to port CMS to MVS ... as a way of providing interactive services; but as mentioned previously, MVS has some fundamental flaws for providing interactive service ... and the implementation never really caught on.

There were also talks on a VM370 based Unix implementation ... with vm370 support for doing unix forks (a user having lots of virtual address spaces) as well as the TSS/370 implementation for AT&T Unix.

My theme for the conference was using a new generation of software tools and programming methodology to do a new highly efficient kernel ... that could run on both the high-end ... aka SHARE LSRAD report ... I scanned my copy and got SHARE permission to put it up on bitsavers here:
http://www.bitsavers.org/pdf/ibm/share/
as well as the emerging microprocessors (aka the old cp67 microkernel from the mid-60s had become enormously bloated and convoluted by this time, as vm370).

This eventually spawned a meeting in the kingston cafeteria that was called ZM (the lunchroom personnel had gotten the meeting title wrong and put up a sign "ZM" rather than "VM"). This then morphs into VM/XB. The mainstream hudson valley jumped on it as a new microkernel for all operating systems, justified by the high cost of duplicated device support and associated error recovery for three different operating systems (aka MVS, VS1, and VM370 would come to share the same microkernel ... in part eliminating duplicate device support costs). The example being the AT&T effort to use a stripped down TSS/370 for all its hardware & device support ... as a 370 unix platform.
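
the shared-microkernel rationale can be sketched like this (a toy sketch under stated assumptions ... not the actual VM/XB or SSUP design; all class and method names are hypothetical):

```python
# Toy sketch: one shared device-support/error-recovery layer underneath
# multiple OS "personalities", illustrating why a common microkernel
# eliminates triplicated driver and error-recovery code.

class SharedDeviceLayer:
    """Common device support + error recovery, written & maintained once."""
    def __init__(self):
        self.retries = 0
    def read_block(self, dev, blk):
        for attempt in range(3):       # one common error-recovery policy
            ok, data = dev.io(blk)
            if ok:
                return data
            self.retries += 1          # transient error: retry
        raise IOError("permanent I/O error")

class Personality:
    """An OS personality (MVS, VS1, VM370, unix ...) layered on the
    microkernel; it supplies policy, not device support."""
    def __init__(self, name, layer):
        self.name, self.layer = name, layer
    def read(self, dev, blk):
        return self.layer.read_block(dev, blk)

class FlakyDisk:
    """Simulated device that fails once, then succeeds."""
    def __init__(self):
        self.fail_once = True
    def io(self, blk):
        if self.fail_once:
            self.fail_once = False
            return False, None         # transient error, recovered above
        return True, f"block-{blk}"

layer = SharedDeviceLayer()            # single copy of driver code
mvs = Personality("MVS", layer)        # every personality reuses it
vm = Personality("VM370", layer)
disk = FlakyDisk()
assert mvs.read(disk, 7) == "block-7"  # recovery handled in shared layer
assert layer.retries == 1
```

the recovery retry happens in one place regardless of which personality issued the I/O ... which is the cost argument: device support and error recovery written once instead of three times.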

You knew it was going to be another Future System failure ... when there were 300-500 people writing VM/XB specifications ... when the original objective was to have a small experienced group (no more than 20-30) doing a lean implementation. misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

for whatever reason, I still have a bunch of the old ZM stuff ... before the morph into vm/xb.

note that as part of the POK effort to have corporate kill the vm370 product, shut down the vm370 product group and move all the people to POK to support mvs/xa (justification was not otherwise being able to meet the mvs/xa ship schedule ... which was nearly 8yrs off at the time) ... the vmtool was developed ... an internal-only 370/xa virtual machine for supporting development, never intended to ship as a product (as mentioned, endicott managed to save the vm370 product mission, but had to reconstitute a development group from scratch).

With the introduction of mvs/xa & 3081 370/xa mode, the company realizes that lots of customers could use a migration tool ... being able to concurrently run mvs and mvs/xa temporarily during the transition period.

vmtool was packaged for various releases as vm/ma and/or vm/sf. a vm/xa mafia faction grew in hudson valley pushing for a full function product (i.e. the vmtool had numerous performance and functional deficiencies compared to the vm/370 of the period).

outside of hudson valley, there was an internal datacenter that got a full function vm/370 running with 370/xa support ... which was enormously better than the vmtool variations of the period ... however hudson valley had managed to justify significant development resources to try and bring the vmtool platform up to vm/370 product level. The internal politics got fairly intense ... and eventually the vm/370 with 370/xa support disappears w/o a trace.

Old email reference 370/xa support in vm370
https://www.garlic.com/~lynn/2011c.html#email860121
https://www.garlic.com/~lynn/2011c.html#email860122
https://www.garlic.com/~lynn/2011c.html#email860123
in this post
https://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Regarding Time Sharing
Newsgroups: bit.listserv.ibm-main
Date: 19 Nov 2012 07:43:19 -0800
shmuel+gen@PATRIOT.NET (Shmuel Metz, Seymour J.) writes:
We had an SE in the mid 1970's who claimed that IBM was ready to ship a TSS release with a virtual machine capability but pulled the plug on it at the last minute. He claimed that performance was good, and was not a happy camper when it was dropped. I don't know whether VM/XB was based on that work or was done from scratch.

re:
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing

an objective of VM/XB was to have a microkernel with common device support & error recovery for the mainstream operating systems (significant cost duplication in having three different device support and error recovery implementations) ... aka the stripped down tss/370 kernel example for at&t unix.

as mainstream hudson valley jumped on vm/xb, bloating to 500 people writing specs and nobody writing code ... there was a contingent that thought things might still be saved by adapting the stripped down tss/370 kernel. it sacrifices much of my original objective of having a microkernel that could run on high-end mainframe (aka LSRAD) as well as low-end (non-370) microprocessors.

while the tss/370 virtual machine of the mid-70s could come close to vm/370 for virtual guest operation ... it still couldn't match vm370/cms for interactive computing (my work on virtual memory management and dynamic adaptive resource management). however, a decade later (mid-80s), vm370 had gotten quite bloated ... while tss/370 had changed little ... and the stripped down kernel for the AT&T unix platform had returned to much closer to the original cp67 microkernel.

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Regarding Time Sharing
Newsgroups: bit.listserv.ibm-main
Date: 19 Nov 2012 09:44:37 -0800
mike@MENTOR-SERVICES.COM (Mike Myers) writes:
Lynn:

I'm quite familiar with that project. Three others and I actually implemented a prototype which let a TSO user issue the command CMS which would obtain a block of storage in the TSO address space and load and run the CMS kernel using SIE. Attempts to perform file I/O would interrupt SIE and execute code which implemented the CMS file system and all needed file functions in a VSAM data set, using CI file I/O. A later design would use the VSAM actual block processor, as opposed to CI file I/O. That file system implementation was my contribution to the project.

Two of us were assigned as technical team leaders for the intended product development. We were staffing our teams when the project was killed.


re:
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#36 Regarding Time Sharing

for other drift ... given SIE was for the vmtool, internal development and virtual guests ... the 3081 implementation was extremely heavy-weight ... executing SIE could consume more elapsed time than typical CMS execution ... in addition, 3081 microcode store was extremely constrained ... and the SIE microcode might actually have to be "paged in".

old email about how 3090 was doing a lot of work fixing some of the enormous short-comings of 3081 SIE:
https://www.garlic.com/~lynn/2006j.html#email810630

aka the 3033 was kicked off in parallel with 3081 ... 3033 was real q&d ... started out mapping 168-3 logic to some other FS chip technology that was 20% faster ... but also had 10 times the circuits per chip. Some logic redesign doing more on-chip got 3033 up to 50% faster than 168-3. when 3033 was out-the-door, that group then starts on 3090. 3090 eventually announces 12Feb1985
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

and other old email about the 3090 service processor group (pair of 4361s running a highly modified version of vm370 release 6) wanting to use my DUMPRX debug tool (I had originally thought it would be released to customers ... especially after it was in use by nearly every internal datacenter as well as customer support PSRs):
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

the person writing the above didn't realize that I had helped the manager that kicked off the vm370 service processor effort several years earlier ... which happened to also be in the middle of the "tandem memo" days ... aka I had been blamed for online computer conferencing on the internal network in the late 70s and early 80s (folklore is that when the executive committee was told about online computer conferencing and the internal network, 5 of 6 wanted to fire me).

misc. past posts mentioning dumprx
https://www.garlic.com/~lynn/submain.html#dumprx

another issue was that the 3081 was a quick&dirty effort (in parallel with the 3033) kicked off after the future system failure ... using some warmed-over FS technology. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys
more detailed discussion of 3081 being warmed over FS
http://www.jfsowa.com/computer/memo125.htm

for something similar, but different: as an undergraduate in the 60s, I modified MVT R18/HASP ... removing 2780 support (to reduce storage footprint) and putting in 2741&tty/ascii terminal support along with an editor that implemented CMS edit syntax (no way to do a straight port since the programming environments were so different) ... which I always much preferred to TSO (assuming having to deal with os/360 and not being able to use real cms). misc. posts mentioning HASP, JES2, NJI, etc
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

Assembler vs. COBOL--processing time, space needed

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Assembler vs. COBOL--processing time, space needed
Newsgroups: alt.folklore.computers
Date: Mon, 19 Nov 2012 22:27:36 -0500
hancock4 writes:
What about a comparison of the S/360-75 to a PC in terms of workload, using the appropriate peripherals?

My recollection from college days was that a 360-65 handled a substantial amount of batch processing from many users through RJE sites. I don't know how much on-line processing was done back then or how many terminals could be hung on the machine (FWIW, we hung four terminals off of S/360-40 with a mini-CICS).

The -75 used quite a bit of off-line storage in terms of mountable tapes and disks. For the equivalent, I would suggest the PC would use varied flashdrives on all its serial ports, mounting and demounting as jobs came and left. Perhaps some I/O through floppy drives, too.

For on-line service, I would expect the same PC to support straightforward web pages to about ten simultaneous users. Given the limitation of "green-on-glass" applications years ago, that shouldn't be much of a problem, including updating disk files.

For RJE, I would have say ten workstations (perhaps a "thin client") with a flashdrive input (equating the card reader) and a printer.

I would expect the central PC to be able to manage all of the above activity from its console, in an analogous fashion to the way HASP and other systems software managed all the above in a large S/360 shop.


re:
https://www.garlic.com/~lynn/2012o.html#24 Assembler vs. COBOL--processing time, space needed

mip rates from:
http://osdir.com/ml/emulators.hercules390.general/2002-10/msg00578.html

2065     360/65       .70
2067     rpq 360/67   .98
2075     360/75       .89
3158-3   370/158     1.00
4341-1   370/4341     .88

dhrystone has vax & 158-3 at 1mips
https://en.wikipedia.org/wiki/Million_instructions_per_second

the 2067 should be the same as the 2065 except when running in virtual memory mode ... where the memory cycle increases from 750ns to 900ns doing the virtual address translation (a 20% increase in memory cycle time, with a corresponding decrease in mip rate). the 360/65 has been considered closer to a .5mip rate in the real world, because of heavy interference from i/o activity on the memory bus ... and the 360/67 should then be a 20% slow-down from that (i.e. closer to .4mip).

360/75 considered much faster than 360/65 ... more like 1mip

in 1979, on an early engineering model of the first 4341 ... i ran the national lab "rain" benchmark (they were looking at large numbers of 4341s for a compute farm). the 4341 ran the benchmark 21% faster than a 370/158 ... assuming the 158 is a 1mip processor, that puts the original 4341 at more like 1.2mips (rather than .88mips). rather than the 360/75 and 4341 being approx. the same mip rate, the 360/75 should be more like the same mip rate as the 370/158 (with the 4341 21% faster). old post with rain benchmark
https://www.garlic.com/~lynn/2000d.html#0
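the derating arithmetic above can be sketched as a back-of-envelope check (all figures come from this post itself, not from any official source):

```python
# Back-of-envelope check of the mip-rate reasoning above.
# All figures are the ones quoted in the post.

# 360/67 vs 360/65: memory cycle grows 750ns -> 900ns in
# virtual memory mode, i.e. a 20% increase in cycle time.
slowdown = 900 / 750
print(round(slowdown - 1, 2))   # 0.2 (20% increase)

# real-world 360/65 closer to .5 mips (i/o interference on the
# memory bus), so a translating 360/67 lands near .5 / 1.2
mips_67 = 0.5 / slowdown
print(round(mips_67, 2))        # 0.42 -- "closer to .4mip"

# 4341 ran the rain benchmark 21% faster than a 370/158;
# taking the 158 as a 1 mips machine:
mips_4341 = 1.0 * 1.21
print(mips_4341)                # 1.21 -- "more like 1.2mips"
```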

the original post includes this reference comparing cp67-360/67 to vm370-3081
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

I claim that I did i/o operations with much less than half the pathlength of mvt. so I would claim that mvt on a 360/75 would have approx. the same i/o rate (or less) as i would get out of cp67 on a 360/67 (i also did a much better job in cp67 of maintaining the multiprogramming level, keeping the processor at 100% utilization ... even with lower overhead)

45 2314 drives @29mbyte is 1.3gbyte ... less than current real storage sizes (an actual 2314 would have somewhat less capacity per pack because of formatting overhead)

say customer has 200 2314 packs ... that is 6gbytes ... still less than can get in modern real storage.

2400ft 800bpi ... 20mbytes.

a library with 1000 tapes would be 20gbytes; it is possible to get server system with 32gbytes (and more) of real storage ... more than typical library of disk packs and tape reels.

four 360 selector channels at 1mbyte each ... but sustained would be more like maybe 50 4k disk records/sec and 50 4k tape records/sec (or equivalent) ... say a max. sustained 400kbytes/sec transfer. a current system would easily hold all that storage in real memory ... it doesn't even need to do i/o to access the data.

say 100 terminals working non-stop at 10chars/sec is a peak 1kbytes/sec. (i had 80 concurrent active users on a 360/67 ... so the total could easily be over 100).

4800baud for rje ... is 480char/sec ... even ten running non-stop is 5kbytes/sec.

a pc with gbit ethernet for communication gives 100mbyte/sec ... it could trivially handle all the communication workload (with headroom of say 20,000 times)

a two-chip e5-2600 is clocked at over 500BIPs ... over 500,000 times faster than a 360/75. all data from disk and tape being able to fit in real storage means that the tape&disk data would be available at gbyte speeds ... ddr3 for the e5-2600 runs at over 10gbyte/sec ... easily tens of thousands of times faster.
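the whole back-of-envelope comparison above can be reproduced in a few lines (figures are the ones in the post; "MB" is the informal decimal megabyte used there):

```python
# Reproducing the back-of-envelope comparison above.
# All figures come from the post itself.

MB = 10**6   # informal decimal megabyte

# disk: 45 2314 packs at ~29MB each; then a 200-pack shop
print(45 * 29 * MB / 10**9)      # ~1.3 gbytes
print(200 * 29 * MB / 10**9)     # ~5.8 gbytes ("6gbytes")

# tape: 2400ft reel at 800bpi -- raw capacity before
# inter-record gaps, which is why the post rounds to 20mbytes
tape = 2400 * 12 * 800           # feet * inches/ft * bytes/inch
print(tape / MB)                 # 23.04
print(1000 * 20 * MB / 10**9)    # 1000-reel library: 20 gbytes

# communications: 100 terminals at 10 chars/sec plus
# ten 4800-baud rje lines running non-stop
terms = 100 * 10                 # 1,000 bytes/sec
rje = 10 * 480                   # 4,800 bytes/sec
gbe = 100 * MB                   # gbit ethernet, ~100 mbyte/sec usable
print(gbe // (terms + rje))      # ~17,000x headroom ("say by 20,000 times")

# cpu: two-chip e5-2600 at ~500 BIPS vs 360/75 at ~1 MIPS
print((500 * 10**9) // (1 * 10**6))   # 500,000x
```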

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printer history

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Tue, 20 Nov 2012 11:05:50 -0500
Ahem A Rivet's Shot <steveo@eircom.net> writes:
The first web server and browser ran on NeXTSTEP - so not PCs. Sun boxes were a popular choice[1] for many early web sites (well into the late 1990s and even into the early 2000s).

So no, the PC did not enable the take off of the web, UNIX did. The PC's influence was to provide cheap commodity hardware that could run UNIX or XENIX or (later) *BSD and Linux, but there were a lot of unix boxes that weren't PCs (most of them bigger and faster).

[1] Remember "We're the dot in dot com" ? They meant it, dot.com was a Sun box.


first webserver outside cern was on slac vm370/cms system
https://ahro.slac.stanford.edu/wwwslac-exhibit

later migrated to nextstep system.

vm370/cms originated as virtual machine cp67/cms system at the science center ... some past posts
https://www.garlic.com/~lynn/subtopic.html#545tech

gml is (also) invented at the science center in 1969 ... originally implemented as an addition to the cms script command ... which was somewhat of a port of the ctss runoff command. "gml" was selected since "g", "m", and "l" are the first letters of the three inventors' last names.
https://www.garlic.com/~lynn/submain.html#sgml

a decade later, gml morphs into iso standard sgml ... and after another decade, sgml morphs into html at cern
http://infomesh.net/html/history/early/

before windows there was ms-dos
https://en.wikipedia.org/wiki/MS-DOS
before ms-dos there was seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before seattle computer there was cp/m
https://en.wikipedia.org/wiki/CP/M
and before cp/m, kildall worked on cp/67 (cms) at npg (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html

cp67 traces back to ctss, and unix indirectly traces back to ctss via multics ... aka some number of the ctss people went to the science center on the 4th flr of 545 tech sq ... others went to multics on the 5th flr.
https://en.wikipedia.org/wiki/CP/CMS
ctss
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
comeau's history about producing cp40 (precursor to cp67), given at 1982 seas (european share) meeting
https://www.garlic.com/~lynn/cp40seas1982.txt

trivia question: when somebody complained about mosaic using the "mosaic" name, they changed the name to netscape. from which company did they get the rights to the "netscape" name? misc. past stories about working as consultants with the company on what would come to be called "electronic commerce"
https://www.garlic.com/~lynn/subnetwork.html#gateway

by the time of the above, we had left IBM ... in large part related to this meeting about cluster scale-up in ellison's conference room early jan1992
https://www.garlic.com/~lynn/95.html#13
some old email about cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

possibly within hrs after the last email in the above (end Jan1992), the cluster scale-up effort is transferred and we are told we can't work on anything with more than four processors. a couple weeks later it is announced as ibm supercomputer (17Feb1992)
https://www.garlic.com/~lynn/2001n.html#6000clusters1

two of the other people mentioned in the meeting later move on and show up at mosaic/netscape responsible for something called the "commerce server" ... and we are brought in as consultants because they want to do payment transactions on the server.

other trivia ... some people from stanford had created a workstation and approached ibm about producing it. there was a meeting scheduled at the ibm palo alto science center ... and various other organizations from around the company were invited (the group in boca that was working on something that would be announced as the ibm/pc, a group in yorktown research that was working on workstations from 3rivers, and a group in san jose research that was working on a prototype office machine that had five M68k processors). after the review, at least the three mentioned groups recommended ibm not produce the workstation (since they all were doing something better) & ibm declined. the stanford people then decided to start their own company.

for other trivia, we were working with NSF and several institutions (including the one that originated mosaic) that were involved in what would become the NSFNET backbone (operational precursor to the modern internet). originally it was targeted as interconnect between the supercomputer centers being funded by congress (as part of national competitiveness program) and we were going to get $20M for the implementation. Congress cuts the budget and several other things happen ... and things get reorged and a NSFNET T1 backbone RFP was released (T1 because I already had T1 and faster links running internally). Internal politics step in (other organizations wanted to win the bid) and we are prevented from bidding. The director of NSF tries to help by writing a letter to the company (endorsed by some number of other gov. agencies) copying the CEO, but that just aggravates the internal politics (including statements that what we already had running is at least five years ahead of all bid submissions). some old related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
and past post
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printer history

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Tue, 20 Nov 2012 11:32:47 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Sun and unix for servers, certainly. All you want for a browser is something that can display graphics, and PCs are the cheapest option. In the early days the web was mostly about text - I think Tim Berners-Lee built it as solution for accessing technical papers. Once it became more about graphics the mainframe looked like a less useful option for browsing. I'd say why not a mainframe with whatever has succeeded the 2250, but I think that would be a workstation like an AIX box these days.

re:
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history

the follow-on to the 2250 was the 3250 ... re-logo'ed from another vendor. in the rs/6000 time-frame there was the 5880 (oops, 5080).

the austin aix group started out with the ROMP 801/risc processor on a project to do a follow-on to the displaywriter. when that was killed, they looked around and decided to retarget the box to the unix workstation market ... and got the company that had done the AT&T unix port to the ibm/pc (PC/IX) to do one for ROMP ... which becomes the pc/rt and aixv2.

there was also a group at the palo alto science center doing a port of bsd unix to vm370 ... that gets redirected to the pc/rt.

the romp&pc/rt then gives rise to RIOS 801 chip for rs/6000 and aixv3 (which merges in a lot of stuff from the bsd port to pc/rt). misc. past posts mentioning 801, risc, romp, rios, pc/rt, rs/6000, power, somerset, aim, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

some recent posts mentioning unix on mainframe
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#36 Regarding Time Sharing

the pc/rt has the pc/at 16bit bus; the rs/6000 moves to the 32bit microchannel bus. the rs/6000 group is directed that they can't do their own adapter cards ... but must use cards from other divisions developed for microchannel (mostly boca/ps2). the problem is these other adapters were developed for the low-end pc market and were not competitive in the high-end workstation market. an example: the group had developed its own high-throughput 4mbit token-ring card for the pc/rt 16bit bus. the ps2 16mbit microchannel token-ring card has lower per-card throughput than the pc/rt 4mbit token-ring card. all the other ps2 cards have a similar low-throughput design point (scsi, graphics, etc).

The rs/6000 group was being forced to use the microchannel version of the 5880 (oops, 5080) ... the acrimony over the issue contributed to the head of the workstation division leaving the company. however, along the way, a ploy to get around the company directive was to produce the rs/6000 model 730 with a vmebus ... which "forced" them to use high-end graphics from a non-IBM source (since no other group inside ibm was producing pc/workstation vmebus products).

for other drift ... the pc/rt had a large "megapel" display. at interop '88 ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

I had pc/rt with megapel in a non-IBM booth (in the center area on corner at right angle to the sun booth).

for other drift ... recent posts comparing e5-2600 to high-end max configured z196 & zEC12 mainframes:
https://www.garlic.com/~lynn/2012d.html#41 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012d.html#50 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012d.html#64 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012e.html#3 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012e.html#4 Memory versus processor speed
https://www.garlic.com/~lynn/2012e.html#94 Can Mainframes Be Part Of Cloud Computing?
https://www.garlic.com/~lynn/2012e.html#99 Can Mainframes Be Part Of Cloud Computing?
https://www.garlic.com/~lynn/2012e.html#105 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012f.html#0 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012f.html#4 Can Mainframes Be Part Of Cloud Computing?
https://www.garlic.com/~lynn/2012f.html#7 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012g.html#36 Should IBM allow the use of Hercules as z system emulator?
https://www.garlic.com/~lynn/2012g.html#38 Should IBM allow the use of Hercules as z system emulator?
https://www.garlic.com/~lynn/2012h.html#4 Think You Know The Mainframe?
https://www.garlic.com/~lynn/2012h.html#20 Mainframes Warming Up to the Cloud
https://www.garlic.com/~lynn/2012h.html#35 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#52 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012h.html#62 What are your experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2012h.html#70 How many cost a cpu second?
https://www.garlic.com/~lynn/2012i.html#11 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012i.html#16 Think You Know The Mainframe?
https://www.garlic.com/~lynn/2012i.html#84 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012i.html#88 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#1 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#34 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#46 Word Length
https://www.garlic.com/~lynn/2012j.html#66 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012j.html#95 printer history Languages influenced by PL/1
https://www.garlic.com/~lynn/2012j.html#96 The older Hardware school
https://www.garlic.com/~lynn/2012k.html#41 Cloud Computing
https://www.garlic.com/~lynn/2012l.html#20 X86 server
https://www.garlic.com/~lynn/2012l.html#27 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012l.html#28 X86 server
https://www.garlic.com/~lynn/2012l.html#30 X86 server
https://www.garlic.com/~lynn/2012l.html#34 X86 server
https://www.garlic.com/~lynn/2012l.html#42 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012l.html#51 Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#56 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#59 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#81 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#87 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#88 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#90 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#100 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#3 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#4 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#5 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#28 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012m.html#31 Still think the mainframe is going away soon: Think again. IBM mainframe computer sales are 4% of IBM's revenue; with software, services, and storage it's 25%
https://www.garlic.com/~lynn/2012m.html#43 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#67 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?
https://www.garlic.com/~lynn/2012n.html#9 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?
https://www.garlic.com/~lynn/2012n.html#13 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012n.html#14 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#45 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#48 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#50 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#56 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#69 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#11 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#21 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#25 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#38 Assembler vs. COBOL--processing time, space needed

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
Newsgroups: alt.folklore.computers
Date: Tue, 20 Nov 2012 11:45:58 -0500
Walter Banks <walter@bytecraft.com> writes:
A significant amount of the web was in place by the time of the Sussex conference in Sept 1976. By that time there were multiple networks and a few network to network gateways. A year later email and file transfer between networks was common.

re:
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#40 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history

there were some number of online virtual-machine-based service bureaus dating back to the 60s. a large one on the west coast was tymshare ... which had its own proprietary network, tymnet. one of their enhancements to vm370/cms was an online computer conferencing system.

in aug1976, tymshare started offering the online computer conferencing system free to (ibm user group) SHARE ... as VMSHARE; vmshare archive (dating back to aug1976)
http://vm.marist.edu/~vmshare
sometimes(?) "404" ... but also at wayback machine
http://vm.marist.edu/~vmshare/

--
virtualization experience starting Jan1968, online at home since Mar1970

360/20, was 1132 printer history

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Tue, 20 Nov 2012 11:49:06 -0500
Dan Espen <despen@verizon.net> writes:
Within my POE, we still have FOCUS applications.

recent posts on 4gl history, starting out on a virtual-machine-based online commercial service
https://www.garlic.com/~lynn/2012b.html#60 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2012d.html#51 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012e.html#84 Time to competency for new software language?
https://www.garlic.com/~lynn/2012n.html#30 General Mills computer

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Regarding Time Sharing
Newsgroups: bit.listserv.ibm-main
Date: 20 Nov 2012 09:11:05 -0800
mike@MENTOR-SERVICES.COM (Mike Myers) writes:
I'm quite familiar with that project. Three others and I actually implemented a prototype which let a TSO user issue the command CMS which would obtain a block of storage in the TSO address space and load and run the CMS kernel using SIE. Attempts to perform file I/O would interrupt SIE and execute code which implemented the CMS file system and all needed file functions in a VSAM data set, using CI file I/O. A later design would use the VSAM actual block processor, as opposed to CI file I/O. That file system implementation was my contribution to the project.

Two of us were assigned as technical team leaders for the intended product development. We were staffing our teams when the project was killed.


re:
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#37 Regarding Time Sharing

... and much more efficient mapping than VSAM ...

os/360 did contiguous allocation ... as well as multi-block reads and even concurrent multi-buffer overlapped asynchronous operation.

cms, unix and various systems can trace back to CTSS either directly or indirectly. base cms did scatter allocation, with contiguous only occurring accidentally. it would try to do a 64kbyte executable load ... which would only result in a real multi-block operation if the blocks happened to map to sequential contiguous allocation.

lots of page-mapped filesystems would do the memory mapping of a virtual address space to a portion of the filesystem, with the actual load then only happening via serialized, synchronous page-faults ... a single 4k at a time (no multi-record operation, no asynchronous overlapped buffers, etc).

when I first did the cms page-mapped filesystem ... it raised the filesystem abstraction ... eliminating lots of the overhead involved in simulating channel program operations, since there was a one-to-one mapping between the filesystem operations and the virtual memory operations. the issue was not to fall prey to loading via pure page-fault operation, with the inefficiency of every page being a serialized, synchronous, non-overlapped 4k transfer.

the unix filesystem has done this with lots of hints between the filesystem operation and the disk management ... things like read-ahead, write-behind, and various others ... to improve disk throughput and overlapped operation. later unix & unix-like systems have also added contiguous allocation support. however, one downside was that almost all disk i/o went through system buffers and required copying between system buffers and application buffers. POSIX async I/O was initially targeted at large multi-threaded DBMS applications doing asynchronous I/O directly to/from the application address space.
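on a modern unix-like system, the distinction between pure demand faulting and giving the kernel read-ahead hints can be sketched with memory mapping (a minimal illustration of today's hint interfaces, not the CMS mechanism; mmap.madvise requires Python 3.8+ and is platform-dependent):

```python
import mmap
import os
import tempfile

# create a small file to map -- a stand-in for a filesystem-resident
# executable image; 64 kbytes = 16 4k pages
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (64 * 1024))
os.close(fd)

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # without hints, the first touch of each page is a serialized,
    # synchronous 4k fault -- the failure mode described above.
    # with hints, the kernel is told to stage the whole range, so the
    # faults find pages already read in (multi-block, overlapped).
    if hasattr(mmap, "MADV_WILLNEED"):      # Linux/BSD, Python 3.8+
        m.madvise(mmap.MADV_WILLNEED)       # asynchronous read-ahead request
        m.madvise(mmap.MADV_SEQUENTIAL)     # expect sequential access

    total = sum(m[i] for i in range(0, len(m), 4096))  # touch every page
    m.close()

os.remove(path)
print(total)   # 16 pages, one byte sampled per page, ord('x') == 120 -> 1920
```

the hints are advisory: the result is identical either way, only the number and size of the underlying disk reads change.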

so another part of the initial cp67/cms page-mapped filesystem operation was being able to maximize multi-block asynchronous disk transfers (not losing a lot of disk performance to synchronous serialized page faults) ... as well as adding the logic to explicitly do contiguous allocation.

one of the fall-outs of page-mapped operation was being able to leverage lots of other work I had done in the virtual memory system to optimize page i/o (like dynamically creating multi-record I/O chains optimized for device transfer). the result not only significantly decreased the total filesystem cpu processing ... but for moderately I/O-intensive applications improved the elapsed-time throughput by a factor of three.
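a toy model shows why chained multi-record i/o wins over one-page-at-a-time faults (the device timings below are assumed, approximate 3330-class figures used purely for illustration, not measurements from the post):

```python
# Rough model of why a chained multi-block i/o beats one-page-at-a-time
# demand faults. Assumed, approximate 3330-class figures: ~30ms average
# seek, ~8.4ms average rotational delay, ~5ms to transfer a 4k block.

seek, latency, xfer_4k = 30.0, 8.4, 5.0   # milliseconds
pages = 16                                 # a 64kbyte executable load

# serialized synchronous page faults: re-position for every page
faulted = pages * (seek + latency + xfer_4k)

# one chained i/o over contiguously allocated blocks:
# position once, then stream all the blocks
chained = seek + latency + pages * xfer_4k

print(round(faulted))            # 694 ms
print(round(chained))            # 118 ms
print(round(faulted / chained))  # ~6x best case; real workloads mix in
                                 # cpu time and partial contiguity, so a
                                 # measured factor of three is plausible
```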

some old numbers comparing page-mapped and non-page-mapped CMS filesystem operation on 3330 disks
https://www.garlic.com/~lynn/2006.html#25 DCS as SWAP disk for z/Linux

later, on 3380, I could frequently get three times the throughput of the unmodified CMS filesystem.

in the referenced comparisons, the non-page-mapped numbers are physical disk SIOs, independent of the number of 4k block transfers, while the page-mapped numbers are 4k block transfers, independent of the number of physical I/Os (lower elapsed time implies fewer physical I/Os with a larger number of blocks transferred per I/O).

the page-mapped interface also provided support for a shared-image function ... i.e. page-mapping supporting the same physical image concurrently in multiple different virtual address spaces. this replaced the vm370 DMKSNT (SAVESYS) mechanism, which required privileged operation to define shareable images.

I would claim that the serialized-synchronization issue with page-fault-oriented page-mapped systems is similar to the serialized synchronization in the mainframe channel-oriented FICON layer that significantly reduces throughput compared to the underlying fibre-channel ... discussed in these recent posts:
https://www.garlic.com/~lynn/2012o.html#25 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#27 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

as an aside, one of my hobbies was providing production operating systems to internal datacenters. when the science center was still on cp67 (with the future system effort in progress and the company moving to 370s & vm370), customers started to drop off ... but then the science center got a 370 and I migrated a bunch of stuff from cp67 to vm370 (including the page-mapped filesystem stuff) ... and the internal datacenter customers picked up again ... including places like the internal world-wide sales&marketing HONE systems. some past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

old email refs:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
Newsgroups: alt.folklore.computers
Date: Tue, 20 Nov 2012 12:48:35 -0500
Ahem A Rivet's Shot <steveo@eircom.net> writes:
The internet was certainly there before the PC, but the WWW I take to mean HTML delivered by HTTP.

re:
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#40 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#41 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

the great tcp/ip change-over to internetworking protocol was 1oct1983

tcp/ip was the technology basis for the modern internet, the nsfnet backbone was the operational basis for the modern internet, and cix was the business basis for the modern internet. some past posts mentioning the NSFNET backbone
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

i've claimed that the internal network was larger than the arpanet/internet from just about the beginning until possibly late '85 or sometime early '86. partially enabling that was that internal network nodes (at least the virtual machine based ones) had a form of gateway from just about the beginning. some past posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

part of the internet exceeding internal network in number of nodes was that the communication group was actively trying to preserve their dumb terminal emulation base ... restricting internal nodes to mainframes ... while internet nodes were starting to appear on minis and workstations (and then PCs). some past posts
https://www.garlic.com/~lynn/subnetwork.html#emulation

late 80s, the communication group finally got the internal network converted to SNA/VTAM ... in part justifying it with a misinformation program. also part of the internal politics preventing us from bidding on the nsfnet backbone was other misinformation from the communication group about how sna/vtam could be applicable for the nsfnet backbone. somebody associated with the communication group started collecting the misinformation emails and then redistributed the collection to interested parties. reference to the misinformation collection distribution here:
https://www.garlic.com/~lynn/2006w.html#email870109

other misinformation email regarding moving the internal network to sna
https://www.garlic.com/~lynn/2006x.html#email870302
https://www.garlic.com/~lynn/2011.html#email870306

also in the late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had strategic responsibility for everything that crossed the datacenter walls ... and in their efforts to preserve the dumb terminal install base (including dumb terminal emulation) and fight off client/server and distributed computing ... they were strangling mainframe datacenters. The disk division was starting to see the first effects with a drop in disk sales as data was fleeing the datacenter to more distributed computing friendly platforms. The disk division had developed several solutions to address the problems, but the communication group was constantly blocking the solutions (since they had corporate strategic ownership of everything that crossed datacenter walls/perimeter).

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
Newsgroups: alt.folklore.computers
Date: Tue, 20 Nov 2012 13:12:41 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
tcp/ip great change-over was 1oct1983 ... to internetworking protocol

re:
https://www.garlic.com/~lynn/2012o.html#44 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

sorry, mistype, it was 1jan1983

some old internet related posts
https://www.garlic.com/~lynn/internet.htm

references
https://www.garlic.com/~lynn/internet.htm#email821022

ibm san jose research getting (phonenet) email gateway into csnet & arpanet

talks about standards work leading up to tcp/ip cut-over
https://www.garlic.com/~lynn/internet.htm#1

part of the arpanet issue was requirement for homogeneous network using IMPs which were relatively tightly controlled ... seems to be approx. 100 IMPs at the time of the 1jan1983 (connecting possibly 250 or so hosts)

discussion of the 1Jan83 ARPANET transition to TCP/IP
https://www.garlic.com/~lynn/2000e.html#email821230
and there still being lingering cutover problems
https://www.garlic.com/~lynn/2000e.html#email830202

in this post
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?

other posts in this thread:
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#40 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#41 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

--
virtualization experience starting Jan1968, online at home since Mar1970

Random thoughts: Low power, High performance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Random thoughts: Low power, High performance
Newsgroups: comp.arch
Date: Tue, 20 Nov 2012 17:46:55 -0500
Stephen Fuld <SFuld@alumni.cmu.edu.invalid> writes:
It depends on what you mean by "something like that". Since the beginning of S/360, IBM has had separate I/O processors. But they are not the same, or even a subset of the ISA for the main processor. They are tailored specifically for the needs of the task at hand, namely processing I/O requests. ISTM that the idea of using the same ISA for the I/O processor and the main processor is an artificial requirement. The needs of an I/O processor are sufficiently different that a much simpler processor is all that is required.

BTW, IBM was certainly not the only one to do this. Most of the mainframe processors had separate I/O hardware. The fact that this idea was eliminated by virtually all current microprocessors in favor of memory mapped I/O to a plethora of different software interfaces is, to me, a substantial mistake.


1988 I was asked if I could help LLNL with standardization of some serial technology that eventually evolves into fibre-channel standard.

IBM had some serial fiber technology that had been knocking around POK from the late 70s that eventually makes it out as ESCON ... at 200mbit/sec transfer ... but traditional s/360 half-duplex operation limits aggregate transfer to maybe 17mbytes/sec. in the 90s, some of the pok mainframe channel engineers start to participate in the fibre channel standard, working on layering mainframe channel conventions on top of fibre channel ... which significantly reduces its effective throughput ... this eventually evolves into FICON ... as replacement for ESCON.

more recently IBM has made some enhancements to FICON ... that allow it to use more of the underlying fibre channel capability, mitigating some of the FICON throughput degradation ... improving FICON throughput by apparently about 3 times.
http://www-03.ibm.com/systems/z/hardware/connectivity/ficon_performance.html

peak z196 at 2M IOPS with 104 FICON channels, 14 storage subsystems, and 14 system assist processors
ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03169usen/ZSW03169USEN.PDF

however, there is mention that 14 system assist processors peak is 2.2M SSCH/sec running at 100% busy ... but recommendations are keeping SAPs at 70% or less (1.5M SSCH/sec).

reference to a (single) fibre-channel for e5-2600 capable of over million IOPS (compared to peak z196 using 104 FICON channels to get to 2M IOPS):
http://www.emulex.com/artifacts/0c1f55d0-aec6-4c37-bc42-7765d5d7a70e/elx_wp_all_hba_romley.pdf
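
a back-of-envelope sketch of those figures (the numbers come from the cited references; the per-channel division is my own arithmetic, not an IBM figure):

```python
# back-of-envelope arithmetic on the cited figures: peak z196 needs 104
# FICON channels to reach 2M IOPS, vs a single fibre-channel HBA at ~1M.
z196_iops = 2_000_000
ficon_channels = 104
per_ficon = z196_iops / ficon_channels
print(f"per FICON channel: ~{per_ficon:,.0f} IOPS")

single_fc_hba = 1_000_000          # e5-2600 figure from the emulex paper
print(f"one FC HBA ~= {single_fc_hba / per_ficon:.0f} FICON channels")

# SAP guidance: 2.2M SSCH/sec peak at 100% busy, recommended 70% or less
sap_peak = 2_200_000
print(f"recommended SAP ceiling: ~{sap_peak * 0.70:,.0f} SSCH/sec")
```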

recent posts
https://www.garlic.com/~lynn/2012o.html#22
https://www.garlic.com/~lynn/2012o.html#25
https://www.garlic.com/~lynn/2012o.html#27

mentioning having done work supporting NSC HYPERchannel for channel extender (for internal datacenters) in 1980 ... something very similar was done for fiber channel standard in late 80s and early 90s ... and part of the FICON recent improvement is somewhat similar with something called "TCW" (done nearly 30yrs later).

the most recently announced mainframe, zEC12, max configuration is 50% more processing than the (previous) max configured z196 ... in part going from max 80 processors to max 101 processors. However, documentation says it is only 30% more for DBMS work (little or no improvement in I/O ... mainly more processing power).
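
a rough decomposition of the zEC12 claim (my own arithmetic, ignoring multiprocessor scaling overheads):

```python
# zEC12 vs z196: +50% total throughput from 80 -> 101 processors implies
# roughly this much per-processor improvement (ignoring scaling effects)
z196_procs, zec12_procs = 80, 101
total_gain = 1.50                           # 50% more processing, max config
proc_count_gain = zec12_procs / z196_procs  # 1.2625x processors
per_proc_gain = total_gain / proc_count_gain
print(f"{proc_count_gain:.2f}x processors -> ~{per_proc_gain:.2f}x per-processor")
```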

note that s/360 were mostly integrated channels ... i.e. processor engine shared between microcode executing 360 instructions and microcode executing channel functions ... it wasn't until you get to 360/65 that you have separate channel hardware boxes. for 370s, integrated channels went up through 370/158; it wasn't until 370/168 that you have separate channel hardware.

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
Newsgroups: alt.folklore.computers
Date: Wed, 21 Nov 2012 09:36:31 -0500
Morten Reistad <first@last.name> writes:
(snip internal IBM stuff. SNA would never have flown on the Internet).

re:
https://www.garlic.com/~lynn/2012o.html#44 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#45 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

trivia: the first appended email in the collection of communication group misinformation about SNA/VTAM for the NSFNET T1 backbone (the person doing the collection prefixed it with editorial comments, effectively that they were blowing smoke and it would never work, in part because most of the sites didn't have a processor running sna/vtam)
https://www.garlic.com/~lynn/2006w.html#email870109

was from one of the executives involved later in transferring cluster scale-up and telling us that we weren't allowed to work on anything with more than four processors ... which occurred possibly only hours after the last email here (late jan1992)
https://www.garlic.com/~lynn/lhwemail.html#medusa
and a couple weeks later announced as supercomputer (for numerical intensive and scientific only) ... 17Feb1992
https://www.garlic.com/~lynn/2001n.html#6000clusters1

this is after work on both scientific with national labs as well as commercial ... reference to early jan1992 meeting in ellison's conference room
https://www.garlic.com/~lynn/95.html#13

also discussed in this recent thread
https://www.garlic.com/~lynn/2012o.html#22 Assembler vs. COBOL--processing time, space needed

with HSDT, I had T1 and faster speed links running ... misc. old posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

which I claim was one of the reasons that the NSFNET T1 backbone RFP specified T1 (the winning bid didn't actually do T1 links, they did 440kbit/sec links ... then, possibly to provide a facade of meeting the terms of the RFP, they put in T1 trunks and telco multiplexors with multiple 440kbit links per T1 trunk; we facetiously commented that they might have just as well called it T5, since possibly some of the 440kbit links &/or T1 trunks were in turn multiplexed over some T5 trunk).

part of the issue was that the communication group only had support for 56kbit/sec links. they even did a report for the executive committee claiming customers wouldn't want T1 links until well into the 90s. their 37x5 products had support for "fat pipes" ... logically treating multiple parallel 56kbit links as a single logical resource. They surveyed "fat pipe" customers and found use dropped to zero by the time it got to six 56kbit links. What they failed to mention was that the telco tariff for T1 tended to be about the same as five 56kbit links ... so customers just went with a full T1 and switched to support from a non-IBM product. At the time of the communication group report to the executive committee, we did a trivial customer survey that turned up 200 such T1 links (at a time when the communication group was claiming there wouldn't be any until nearly a decade later).
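
the fat-pipe tariff arithmetic can be sketched like this (prices are made-up placeholders; only the five-to-one ratio comes from the post):

```python
# fat-pipe vs full T1: if a T1 is tariffed at about the price of five
# 56kbit links, then at five or more parallel links the T1 is at least
# as cheap and has ~5x the bandwidth -- matching the survey finding that
# fat-pipe use dropped to zero by six links.  prices are placeholders.
price_56k = 1.0                 # hypothetical monthly cost of one 56kbit link
price_t1 = 5.0 * price_56k      # "about the same as five 56kbit"
t1_kbit = 1544

for n in range(1, 8):
    fatpipe_cost = n * price_56k
    fatpipe_kbit = n * 56
    winner = "T1 wins" if price_t1 <= fatpipe_cost else "fat pipe cheaper"
    print(f"{n} x 56k = {fatpipe_kbit:3d}kbit at {fatpipe_cost:.0f} "
          f"(T1: {t1_kbit}kbit at {price_t1:.0f}) -> {winner}")
```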

the communication group did finally come up with the 3737 for SNA that sort-of did T1. It had multiple M68K processors with lots of buffering and spoofed a local CTCA to the local VTAM ... it would immediately ACK to the local host that the transmission arrived at the remote end ... even before transmission (effectively running non-SNA over the actual telco link). It could reach about 2mbit/sec aggregate over a terrestrial link (aka T1 is 1.5mbit full-duplex or 3mbit aggregate, EU T1 is 2mbit full-duplex or 4mbit aggregate). It could do this for straight data transmission, but had to do end-to-end for numerous SNA control RUs. old email
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2011g.html#email881005

but it was strictly SNA/VTAM ... because it was so tightly tied to spoofing SNA/VTAM, it wasn't useable for any other protocol.
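
one way to see why the 3737 went to the trouble of spoofing local ACKs: a window protocol can only keep (window size / round-trip time) in flight. the window and latency numbers below are illustrative assumptions, not VTAM or 3737 specs:

```python
# bandwidth-delay sketch: what a small end-to-end window does to a T1 link.
# window size and RTT are illustrative assumptions, not VTAM/3737 specs.
link_bps = 1_544_000      # US T1, one direction
rtt_s = 0.060             # assumed terrestrial round-trip time

fill_bytes = link_bps / 8 * rtt_s          # bytes in flight to keep link busy
print(f"bytes needed in flight: {fill_bytes:,.0f}")

window_bytes = 7 * 256                     # e.g. 7 outstanding 256-byte RUs
capped_bps = min(link_bps, window_bytes * 8 / rtt_s)
print(f"throughput with {window_bytes}-byte window: {capped_bps/1000:.0f} kbit/sec")
# spoofing the ACK at the local CTCA shrinks the effective round trip to
# nearly zero, so the 3737's own buffering can drive the telco link at full rate.
```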

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
Newsgroups: alt.folklore.computers
Date: Wed, 21 Nov 2012 10:09:37 -0500
Morten Reistad <first@last.name> writes:
This nsfnet backbone was the 1987 upgrade, lasting 5 years. This was when the Internet got traction, and broke out as a free network. This involved the "cix wars" 1991-1992. By February 1992 we had a network infrastructure we would recognise as similar to what we have today.

The network of then was based on remote logins (shell accounts), ftp file transfers and e-mail. All of these used a set of network standards to interoperate, but the applications were all localised into local infrastructure.

TCP/IP carried the day.



https://www.garlic.com/~lynn/2012o.html#44 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#45 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#47 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

as previously mentioned the original NSFNET backbone T1 was going to be interconnect of the federal funded supercomputer centers ... part of congressional national competitiveness program ... and I was going to get $20M to do the implementation. Funds then got cut, things re-org, and then RFP released ... by which time, corporate politics prevented us from bidding (even with support from director of NSF and some other agencies).
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

prior to that, NSF had been providing funding for point-to-point slower-speed links as part of CSNET (dating back to the early 80s).

there was then a follow-on NSFNET backbone T3 upgrade RFP. possibly because they thought they would shut down my sniping ... I was asked to be the red team ... and a couple dozen people from a half-dozen labs around the world were the blue team. At the final executive review, I presented first ... and then, five minutes into the blue team presentation, the executive (running the review) pounded the table and said he would lay down in front of a garbage truck before allowing anything but the blue team proposal to go forward.

there was a claim that the standard telcos were in a chicken&egg corner ... they had significant base run rate ... covered by transmission rate use charges. they weren't going to get development of new bandwidth hungry applications w/o significant reduction in tariffs (which would put them into the red for years). the "solution" was to "over-provision" the initial NSFNET T1 backbone ... sort of as a closed innovation technology incubator. Even tho the T1 backbone was only running 440kbit/sec and the RFP award was for $11.2M ... the claim was that something closer to $50M worth of resources was put into the effort (backbone rules would prevent commercial use, thereby not affecting standard telco revenue ... but still allowing a closed incubator for development of the bandwidth hungry applications).

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
Newsgroups: alt.folklore.computers
Date: Wed, 21 Nov 2012 12:33:27 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
as previously mentioned the original NSFNET backbone T1 was going to be interconnect of the federal funded supercomputer centers ... part of congressional national competitiveness program ... and I was going to get $20M to do the implementation. Funds then got cut, things re-org, and then RFP released ... by which time, corporate politics prevented us from bidding (even with support from director of NSF and some other agencies).

re
https://www.garlic.com/~lynn/2012o.html#44 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#45 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#47 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#48 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

other trivia from the 80s NSF supercomputer era ... NSF gave Univ. of Cali. $120M for supercomputer center at Berkeley. UofC board of regents said that their master plan called for the next new building to go in at UC San Diego ... and so the funds were diverted to UC San Diego ... becoming the San Diego Supercomputing Center.

--
virtualization experience starting Jan1968, online at home since Mar1970

What will contactless payment do to security?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 22 Nov 2012
Subject: What will contactless payment do to security?
Blog: Information Security
Different contactless cards have used different technology ... the simplest have used straight RFID read-only technology developed for inventory & electronic barcode ... basically the same information encoded in the magstripe is coded for the RFID signal ... you don't actually need physical contact to retrieve the (magstripe) information. This gives rise to lots of stuff about carrying the cards in radio-frequency shielded wallets.

A little more complex are some of the contactless developed for transit/metro.

In the late 90s, when I was doing something akin to the payment contact chip standard ... but w/o its list of known vulnerabilities (semi-facetious claim that I would take a $500 milspec part and aggressively cost reduce it by 2-3 orders of magnitude while making it more secure), I was approached by members of the transit industry wanting it to also support transit contactless. This required lots of chip power-consumption and time-constraint design considerations (100ms elapsed time with RF signal&power at 10cm) ... while being more secure than the contact payment chipcards. It took a little invention to not sacrifice any security and still do the transaction within the RF power and elapsed time constraints. some references
https://www.garlic.com/~lynn/x959.html#aads

The response they had previously gotten from a contact payment card was that each transit turnstile would have a 10ft-long RF signal tunnel in front that riders would slowly walk through on the approach to the turnstile. A standard contact payment card would be in a special RF sleeve with battery power (rather than RF both providing power and communication) ... and if the riders walked through the tunnel slowly enough, the transaction might complete by the time they transited the 10ft tunnel.

There was a large pilot contact chip payment card deployment in the US around the turn of the century. This was in the YES CARD period ... i.e. trivial to make a clone/counterfeit card. Reference in this post (gone 404 but still lives at the wayback machine):
https://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html

see the end of the above post. The terminal interaction with the trivial clone/counterfeit YES CARD: 1) was the correct PIN entered, card always responds "YES", 2) should the transaction be offline, card always responds "YES", 3) is the transaction within open-to-buy, card always responds "YES" (aka the sequence responsible for the YES CARD label). In the aftermath all evidence of the large US pilot appears to disappear w/o a trace, and the US market becomes extremely risk averse, willing to wait until such technologies are thoroughly vetted in other markets.

Law enforcement officers make presentations at (US) ATM Integrity Task Force meetings ... resulting in comment from the audience that "they have managed to spend billions of dollars to prove chips are less secure than magstripe". The issue is that countermeasure to counterfeit/cloned magstripe is to deactivate/close the account. In the case of clone/counterfeit YES CARD ... since the transaction doesn't go online, deactivating the account has no effect (aka reference in the cartes 2002 article that once a clone/counterfeit YES CARD is created, it goes on forever).
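
a minimal sketch of the three-question dialogue described above, and of why deactivating the account has no effect (class and function names are illustrative, not actual EMV messages):

```python
# the three terminal questions from the post, answered by a cloned "YES CARD":
# because all three answers are YES, the transaction completes offline and
# the issuer (who could decline a deactivated account) is never consulted.
# names/flow are an illustrative sketch, not the actual EMV protocol.

class ClonedYesCard:
    def pin_ok(self, pin):                 # 1) was the correct PIN entered?
        return True                        #    always "YES"
    def take_offline(self):                # 2) should the transaction be offline?
        return True                        #    always "YES"
    def within_open_to_buy(self, amount):  # 3) is it within open-to-buy?
        return True                        #    always "YES"

def terminal(card, pin, amount, issuer_check):
    if not card.pin_ok(pin):
        return "declined: bad PIN"
    if card.take_offline() and card.within_open_to_buy(amount):
        return "approved offline"          # issuer_check never runs
    return issuer_check(amount)            # only an online auth could catch a closed account

# even after the issuer closes the account, the clone keeps working:
closed_account = lambda amount: "declined: account deactivated"
print(terminal(ClonedYesCard(), "0000", 50, closed_account))
```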

past posts reference YES CARD
https://www.garlic.com/~lynn/subintegrity.html#yescard

--
virtualization experience starting Jan1968, online at home since Mar1970

Lotus: Farewell to a Once-Great Tech Brand

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 22 Nov 2012
Subject: Lotus: Farewell to a Once-Great Tech Brand
Blog: IBMers
Lotus: Farewell to a Once-Great Tech Brand
http://techland.time.com/2012/11/20/lotus-farewell-to-a-once-great-tech-brand/
Wolfgang Gruener of Tom's Hardware alerted me to a bit of news that, while minor, has left me in a wistful mood. IBM is planning to remove the Lotus branding from its Notes and Domino workgroup products. It's the apparent end of Lotus, a brand which was launched in 1982 with the 1-2-3 spreadsheet, the most important productivity application of its era.

... snip ...

Latter half of the 80s, my wife was co-author of a response to a gov. agency large campus distributed computing request ... in which middle-layer, 3tier architecture (lots of stuff now referred to as middleware) was introduced. We were then out pitching 3-tier architecture to customer corporate executives ... and taking lots of arrows in the back from the communication group. This was at a time when they were attempting to preserve their dumb terminal (and terminal emulation) install base and making every effort to fight off client/server and distributed computing. some past posts on the subject
https://www.garlic.com/~lynn/subnetwork.html#3tier

It was in this time-frame that senior disk engineer got a talk scheduled at the internal, world-wide annual communication group conference and opened his talk with the statement that the communication group was going to be responsible for the demise of the disk division. The communication group had strategic ownership of everything that crossed the datacenter walls ... and they had stranglehold on the datacenter (attempting to preserve their dumb terminal/terminal emulation install base, fighting off client/server and distributed computing). The disk division was starting to see what was happening with drop in disk sales as data was fleeing the datacenter for more distributed computing friendly platforms. The disk division had come up with several solutions to address the problem, but constantly vetoed by the communication group. some past posts on the subject
https://www.garlic.com/~lynn/subnetwork.html#terminal

Before lotus there was visicalc:
https://en.wikipedia.org/wiki/VisiCalc

one of the visicalc creators was at MIT and then at one of the two virtual-machine-based online service bureaus that spawned in the 60s from ibm cambridge science center's cp67
https://en.wikipedia.org/wiki/Bob_Frankston

from above:
Fellow of the Association for Computing Machinery (1994) "for the invention of VisiCalc, a new metaphor for data manipulation that galvanized the personal computing industry"

... snip ...

misc. past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printer history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Thu, 22 Nov 2012 13:39:04 -0500
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Another big factor in the business world is control. Psychologists can argue over the reasons behind the NIH syndrome, but many businesses to this day aren't looking for the best solution, but the one that they control. Witness Lynn's accounts of the political battles at IBM (SNA vs. TCP/IP, token ring vs. Ethernet). To this day there are companies who will attempt to sabotage elegant, open standards in favour of a piece of crap that they own and want to monopolize.

re:
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#40 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012o.html#41 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#44 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#45 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#47 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#48 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012o.html#49 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory

part of the issue was SNA wasn't a networking solution ... it was targeted at managing large numbers of dumb terminals (later morphing into dumb terminal emulation). later, as devices got more sophisticated, they could start to move past the dumb terminal paradigm ... but the communication group had a huge infrastructure and position to protect.

at the time original SNA was being formulated, my wife was co-author of AWP39, Peer-to-peer networking (aka ibm architecture white paper #39) ... so it is possible that some in the communication group viewed her as the enemy.

then she joined the jes2/jes3 group and then was con'ed into moving to POK to be in charge of loosely-coupled architecture (ibm mainframe for cluster). while there she did Peer-Coupled Shared Data architecture ... some past posts
https://www.garlic.com/~lynn/submain.html#shareddata

she didn't remain long in the position responsible for loosely-coupled architecture, in part because there were lots of skirmishes with the communication group over forcing her to use SNA for loosely-coupled operation ... and little uptake (except for IMS hot-standby) ... until much later with SYSPLEX. another recent reference in (linkedin) IBM discussion that drifted from the original topic
https://www.garlic.com/~lynn/2012o.html#51
which was IBM recent decision to discontinued the Lotus brand
http://techland.time.com/2012/11/20/lotus-farewell-to-a-once-great-tech-brand/

One of the closest things to networking for SNA was APPN (originated as AWP164) ... at the time, the person responsible and I reported to the same executive. I would kid him about stopping trying to help the SNA organization since they would never appreciate anything he did. When it came time to announce APPN, the SNA group objected ... then there was a several-week period while the issues were escalated ... finally there was a carefully rewritten APPN announcement letter that avoided any implication that APPN was in any way related to SNA.

the issue was the communication group had built up an enormous operation dependent on dumb terminal ... and was finding it impossible to adapt. misc. past posts mentioning dumb terminal & lack of adapting (including numerous references to presentation by senior disk engineer in the late 80s that started out with claim that the communication group was going to be responsible for the demise of the disk division ... because the stranglehold that the communication group had on mainframe datacenter)
https://www.garlic.com/~lynn/subnetwork.html#terminal

in some of the "Boyd" groups (that tend to have some amount of military orientation), there is the periodic observation that generals learn little new from winning wars ... it tends to be when they lose that they learn something new. there was a reference that Napoleon was in his 30s when he was winning battles against generals that were in their 60s-80s (and many of Napoleon's senior officers started in their teens). Comment was that Wellington was the same age as Napoleon and studied at the same military schools in France as Napoleon. recent reference in Boyd discussion:
https://www.garlic.com/~lynn/2012j.html#63 Difference between fingerspitzengefuhl and Coup d'oeil?

I had sponsored Boyd briefings at IBM starting in the early 80s ... some past posts
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printer history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Fri, 23 Nov 2012 11:15:13 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
It always puzzles me that businesses and government agencies don't insist on standardization when they buy something. They pick up any old piece of microsoft cr@p and "standardize" on, for example, "word." Two releases later they can't even read their old documents. If everyone just said "NO" we could eliminate this problem forever. Innovation might proceed more slowly, but we'd all be better off.

Somebody told me that they had invented the word COTS and were behind the gov's move to buy commercial off-the-shelf ... instead of always inventing it themselves from scratch. Lots of innovation has huge up-front invention and development costs ... it becomes more cost effective if spread over millions of units rather than a couple hundred.

it also led to federal legislation pushing commercialization of inventions from federal labs ... as well as justification that it would help make the country more competitive on the world stage.

various standardization efforts ... theoretically would lead to being able to directly compare products from competing vendors, with competition improving the market and products. this was supposedly part of the transition from the (security) rainbow books to common criteria. however, at a presentation a few years ago there was the observation that nearly all common criteria evaluations had undisclosed deviations ... making apples-to-apples comparison nearly useless/impossible.

nearly the opposite is that organization "standardize" on single solution ... theoretically to minimize its support, maintenance, and training costs.

"standardizing on" something twists the meaning of having "standards for" something.

then there is the major market force that tries to protect competitive advantage (control) via proprietary, exclusive features and/or tweaking things to make them different from everything else. this has also come up in discussions about the patent system; originally intended to give innovative individual inventors protection from established dominant market forces, the patent system in its current form is almost the exact opposite of its original purpose ... giving the major market forces barriers to competition.

--
virtualization experience starting Jan1968, online at home since Mar1970

Thoughts About Mainframe Developers and Why I Was Oh So Wrong

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 23 Nov 2012
Subject: Thoughts About Mainframe Developers and Why I Was Oh So Wrong
Blog: Enterprise Systems
re:
http://lnkd.in/Khsp5r

Thoughts About Mainframe Developers and Why I Was Oh So Wrong
http://insidetechtalk.com/thoughts-about-mainframe-developers-and-why-i-was-oh-so-wrong/

The leading edge of the distributed tsunami wave was the 4300 systems. Both vax and 4300s sold into the exploding mid-range market in about the same numbers ... old post with a decade of vax numbers sliced&diced:
https://www.garlic.com/~lynn/2002f.html#0

the big difference in 4300 numbers were large commercial accounts ordering hundreds at a time for large distributed computing operation. some old 4300 email starting before first customer ship
https://www.garlic.com/~lynn/lhwemail.html#43xx

I did a lot of work for the disk engineering labs in bldgs. 14&15 and they let me play disk engineer ... however, I also got lots of processor access. Typically the 3rd or 4th engineering model of a processor went to bldg. 15 for disk testing ... as a result I had better early processor access than some people in the development labs ... i.e. I would do benchmarks on the bldg. 15 4341 for the endicott performance test group. As in the email refs, I did some benchmarks for a national lab that was looking at 70 4341s for a compute farm ... but there was also an early case of a large financial institution wanting to start out with 60 distributed 4341s. old posts on getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

the 4361/4381 follow-ons were expecting continued explosion in mid-range ... but as can be seen by the vax numbers, the mid-range/distributed market was already starting to move to workstations & larger PCs.

Note that the internal network was larger than the arpanet/internet from just about the beginning until sometime late '85/early '86 ... in part because (at least the vm support) effectively had a form of gateway built into every node. This is something that the arpanet didn't get until the big switch-over from host/IMPs to internetworking protocol on 1Jan1983. Part of internet passing internal network in number of nodes was workstations&PCs becoming network nodes, while internally, workstations&PCs were being restricted to dumb terminal emulation (the internal network was non-SNA until conversion in late 80s ... at a time, when it would have been much more productive and efficient to convert to tcp/ip).

Late 80s, a senior disk engineer got a talk scheduled at an annual, internal, world-wide communication group conference and opened with the statement that the communication group was going to be responsible for the demise of the disk division because of the stranglehold they had on mainframe datacenters (strategic "ownership" of everything that crossed datacenter walls). The issue was the communication group was trying to preserve their dumb terminal emulation install base and fighting off client/server and distributed computing. The disk division was seeing this in a drop in disk sales as data fled datacenters for more distributed-computing-friendly platforms. The disk division had come up with solutions to address the problem, but they were being vetoed by the communication group (preserving the dumb terminal paradigm and strategic ownership of everything that crossed datacenter walls). old posts mentioning the above
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

360/20, was 1132 printer history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Sat, 24 Nov 2012 12:26:05 -0500
hancock4 writes:
Sometimes the flexibility of digital allows it to produce better pictures than could be made on a Kodachrome slide. Kodachrome was somewhat slow (ASA 25, 64, or 200). In low light with a hand-held camera, the higher speeds of digital could get a picture that, while not optimum, would still be better than what Kodachrome could get with a hand-held camera.

Also, Kodachrome was balanced for daylight; taking photos under other light sources required filters. Digital cameras easily switch to accommodate other light sources. Kodachrome slides taken in fluorescent-lit rooms have a sickly green cast unless an FLD filter was used. No prob with digital. Indeed, digital allows an exact match to the specific type of tube. It was possible to do that with Kodachrome, but it required a complete, cumbersome filter set.


when called in to participate in the Berkeley "10M" project ... now Keck
http://keckobservatory.org/

... part of the effort was moving from film to ccd ... tours of back area of Lick (eastern hills from san jose)
http://www.ucolick.org/

testing a 200x200 ccd array (40k pixels) ... rumors at the time that spielberg might be testing 2048x2048 ... 4 megapixels

part of the motivation was ccd was 100 times more sensitive than film (aka only needed 1% of the photons). the difficulty was ccds weren't very stable ... (at the time) needed to calibrate ccd read-outs for 20-30 seconds with a pure white card just before taking an image.

keck in 2003
http://keckobservatory.org/news/keck_observatorys_premier_planet-hunting_machine_is_getting_even_better/

from above:
The new CCD system will be a mosaic of three 2048 x 4096 CCD chips with 15-micron pixels arranged in a stacked configuration for an image dimension of 6144 x 4096 pixels, about 130 percent larger in area than the current CCD. The detector will replace the current chip, which is a single 2048 x 2048 device that has larger 24-micron pixels. When choosing a spectrograph for their science, astronomers sometimes have to sacrifice wavelength coverage for spectral resolution. By more than doubling the detector size without reducing the resolution they will be able to have their cake and eat it too

... snip ...

some old email
https://www.garlic.com/~lynn/2007c.html#email830803b
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2007c.html#email830804c
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121
https://www.garlic.com/~lynn/2006t.html#email841121
https://www.garlic.com/~lynn/2004h.html#email860519

misc. past posts
https://www.garlic.com/~lynn/2001.html#73 how old are you guys
https://www.garlic.com/~lynn/2004h.html#7 CCD technology
https://www.garlic.com/~lynn/2004h.html#8 CCD technology
https://www.garlic.com/~lynn/2004h.html#9 CCD technology
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005l.html#9 Jack Kilby dead
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2006t.html#12 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2007.html#19 NSFNET (long post warning)
https://www.garlic.com/~lynn/2007c.html#19 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007c.html#20 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007c.html#50 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007d.html#31 old tapes
https://www.garlic.com/~lynn/2007t.html#30 What do YOU call the # sign?
https://www.garlic.com/~lynn/2008f.html#80 A Super-Efficient Light Bulb
https://www.garlic.com/~lynn/2009m.html#82 ATMs by the Numbers
https://www.garlic.com/~lynn/2009m.html#85 ATMs by the Numbers
https://www.garlic.com/~lynn/2009o.html#55 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2009o.html#60 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2010i.html#24 Program Work Method Question
https://www.garlic.com/~lynn/2011b.html#49 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011d.html#9 Hawaii board OKs plan for giant telescope
https://www.garlic.com/~lynn/2011p.html#115 Start Interpretive Execution
https://www.garlic.com/~lynn/2012k.html#10 Slackware
https://www.garlic.com/~lynn/2012k.html#86 OT: Physics question and Star Trek

--
virtualization experience starting Jan1968, online at home since Mar1970

Reduced Symbol Set Computing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Reduced Symbol Set Computing
Newsgroups: comp.arch
Date: Sat, 24 Nov 2012 16:45:53 -0500
MitchAlsup <MitchAlsup@aol.com> writes:
To be fair, I knew the original 360 (circa 1963) had a bit to switch character order between EBCDIC and ASCII. So that is where my estimate was based.

How ASCII Came About
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM
EBCDIC and the P-Bit (The Biggest Computer Goof Ever)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

from above:
Who Goofed?

The culprit was T. Vincent Learson. The only thing for his defense is that he had no idea of what he had done. It was when he was an IBM Vice President, prior to tenure as Chairman of the Board, those lofty positions where you believe that, if you order it done, it actually will be done. I've mentioned this fiasco elsewhere. Here are some direct extracts:


... snip ...

recent refs:
https://www.garlic.com/~lynn/2012.html#100 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2012e.html#52 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2012e.html#55 Just for a laugh... How to spot an old IBMer
https://www.garlic.com/~lynn/2012k.html#73 END OF FILE
https://www.garlic.com/~lynn/2012l.html#36 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012l.html#84 72 column cards
https://www.garlic.com/~lynn/2012m.html#52 8-bit bytes and byte-addressed machines

--
virtualization experience starting Jan1968, online at home since Mar1970

Regarding Time Sharing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 24 Nov 2012
Subject: Regarding Time Sharing
Blog: Old Geek
re:
http://lnkd.in/G7cxvk

started out with refs to ibm-main thread ... gatewayed to usenet and google groups:
https://groups.google.com/forum/?fromgroups=#!topic/bit.listserv.ibm-main/Lqlnis7oGjM

from long ago and far away ... after a previous meeting where the cafeteria people had put up "ZM" on the sign, instead of "VM":

Date: 03/16/83 07:57:17
To: Distribution

The next task force consolidated meeting will take place in IBM Kingston Building 971, on Wednesday March 23, 1983 at 9:15am.(Room 38 & 39) Building 971 is located off the main plant site, but quite close to it. A map, not to scale, follows:

< ... snip ... >

Each study group should be prepared to present their specific findings and recommendations. Wednesdays meeting will be a review by the members of ZM group. On Thursday, xxxxxx will join the discussions for the whole day.

TENTATIVE SCHEDULE

Wednesday 3/23
9:15 11:15 VMC
11:15 12:30 Supervisor
12:30 1:15 Lunch
1:15 2:15 Performance
2:15 3:15 Requirements
3:15 4:15 Bell Labs TSS Performance data
4:15 5:15 Discussion

Copy list:

ZM DESIGN TASKFORCE

<... snip ...>


... snip ... top of post, old email index

in ibm-main thread:
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#36 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#37 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#43 Regarding Time Sharing

--
virtualization experience starting Jan1968, online at home since Mar1970

ISO documentation of IBM 3375, 3380 and 3390 track format

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISO documentation of IBM 3375, 3380 and 3390 track format
Newsgroups: alt.folklore.computers
Date: Sun, 25 Nov 2012 10:00:42 -0500
Dan Espen <despen@verizon.net> writes:
The IBM 1311 had fixed length sectors, so early on IBM had fixed length, went to CKD, then went back to fixed length.

IMO, CKD was always a bad idea.


ios3270 simulated "green card" ... q&d conversion to html, DASD capacity formulae
https://www.garlic.com/~lynn/gcard.html#26.3

old email mentioning original 3380s had 20 track width spacing between tracks ... later 3380s went to higher density by reducing inter-track spacing (but recording density, aka bytes/track, was the same)
https://www.garlic.com/~lynn/2006s.html#email871122
and for the fun of it (in same post)
https://www.garlic.com/~lynn/2006s.html#email871230

3370 FBA ... mid-range, original thin-film head:
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3370.html
3310 FBA ... entry-level
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3310.html

I've frequently claimed CKD (& multi-track search) was a mid-60s trade-off between scarce real storage and using I/O resources for finding stuff on disk. multi-track search allowed the index to be kept on disk and searched using an argument in processor storage. the downside was that the search argument was refetched from processor storage for every compare ... requiring the device, controller, and channel path to be dedicated to the operation. the 2314 had 20 tracks/cylinder and spun at 2000RPM ... so a full-cylinder index search could take 1/100th of a minute, .6 seconds. A multi-cylinder index might require multiple multi-track searches. This was extensively used by os/360 (and descendants) for the vtoc (master disk directory) and the PDS index (most common library format).
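
The search-timing arithmetic above can be sketched in a couple of lines (a back-of-envelope check using the post's figures, nothing more):

```python
# 2314 multi-track search timing, using the figures above
# (20 tracks/cylinder, 2000 RPM as stated in the post).
RPM = 2000
TRACKS_PER_CYL = 20

rev_time = 60.0 / RPM                        # 0.03 sec per revolution
full_cyl_search = TRACKS_PER_CYL * rev_time  # device, controller, and channel
                                             # stay busy the entire time

print(f"one revolution:       {rev_time * 1000:.0f} ms")
print(f"full-cylinder search: {full_cyl_search:.2f} sec")  # 1/100th of a minute
```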

By at least the mid-70s, the trade-off had inverted with real storage becoming relatively more available and I/O starting to represent greater and greater bottleneck.

In the late 70s, I was brought in to shoot a severe performance "bug" at a large national retailer ... that had multiple os/360 systems all sharing some number of common disks (loosely-coupled). They brought me into a classroom where the tables were covered with foot-high stacks of performance monitor data from all the systems (processor and disk activity sampled every several minutes). Lots of experts from across IBM had previously been into the customer to look at the problem.

As stores started to wake up in the morning across the country, throughput would degenerate. Looking at the data, I realized that a certain disk tended to have aggregate peak activity (summed across the different system performance activity reports) of around 6-7 I/Os/second (3330 disks were normally capable of 30-40 I/Os/sec).

After some investigation, it turned out that disk contained the common, shared store application library for the whole country (every time an application was invoked at a store ... on any system ... it would be loaded from this PDS-format application library). The PDS index was 3cyls (for all the applications) and it took on avg. a 1.5cyl index search for each application. 3330s rotated at 3600rpm and had 19 tracks/cylinder ... so a full-cylinder search took 19/60secs (317mills) elapsed time, the avg. search took 28.5/60secs (475mills) elapsed time, and the avg. program load took 30mills. Each application load took on avg. 1/2sec and 3-4 disk i/os. That limited the number of program loads to 2/sec across all their retail stores in the whole nation. The other downside ... any other disks sharing the same controller and/or channel could be "locked out" for at least the duration of the longest operation (approx 1/3rd sec).
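
The retailer bottleneck arithmetic can be reproduced directly (same figures as the paragraph above):

```python
# 3330: 3600 RPM, 19 tracks/cylinder; 3cyl PDS index, avg search = 1.5cyl.
RPM = 3600
TRACKS_PER_CYL = 19

rev = 60.0 / RPM                   # ~16.7 ms per revolution
full_cyl = TRACKS_PER_CYL * rev    # ~317 ms full-cylinder index search
avg_search = 1.5 * full_cyl        # ~475 ms avg search (1.5 cylinders)
load = 0.030                       # ~30 ms for the actual program load
per_load = avg_search + load       # ~0.5 sec per application load

print(f"avg index search {avg_search * 1000:.0f} ms; "
      f"ceiling of {1 / per_load:.1f} program loads/sec nation-wide")
```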

So the solution was to break the application library into multiple PDS files, organized with the highest-used members at the front of the directory ... and replicate the set of application libraries, one for each computer system, on their own private controller/channel.

recent posts mentioning optimizing os/360 disk layout (including forcing the highest-used PDS members to the front of the index and the front of the file) as an undergraduate in the 60s ... getting a nearly three-times increase in throughput for the univ. workload:
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2012e.html#98 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012h.html#66 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012i.html#9 Familiar
https://www.garlic.com/~lynn/2012o.html#31 Regarding Time Sharing

A similar real-storage trade-off was IMS versus relational dbms. IMS was a "physical" DBMS from the 60s ... where disk record location was exposed as part of the DBMS data to applications and used to move directly from one record to the next desired record. The original sql/relational was System/R ... some past posts
https://www.garlic.com/~lynn/submain.html#systemr

System/R had gone to a higher level of abstraction and had eliminated direct disk record pointers as part of application data. The IMS people would claim that System/R doubled the disk space requirement (with implicit indexes buried underneath the application API) and increased the number of disk i/os by possibly 4-5 times (for reading the index to find the disk record). The System/R folks would say that exposing record pointers to applications enormously increased maintenance and administration overhead.

By the 80s, decreasing disk prices significantly mitigated the disk space costs and increasing system real storage allowed indexes to be cached ... mitigating the number of disk I/Os (for RDBMS compared to IMS). At the same time, people-skill availability and people costs for maintenance and administration were limiting IMS (it was significantly easier and less costly to deploy RDBMS compared to IMS).

In the late 70s, the company had created the 3310 & 3370 "fixed block architecture" disks for entry-level and mid-range systems (512-byte physical blocks). In the early 80s, the high-end 3380 CKD disk was created ... but it had fixed-block "cells" which implemented variable-length record formats by simulation, rounding up to "cell" boundaries (this can be seen in the formulas calculating the number of records that can be formatted on each track based on record length). The corporation's high-end flagship operating system MVS (derived from mid-60s OS/360) only had CKD support (and its current descendant still lacks any sort of FBA support, even though no real CKD device has been manufactured for decades, being simulated on industry-standard fixed-block disks). In the early 80s, I was told that I needed a $26M business case to justify adding FBA support to MVS ... even if I provided them fully integrated and tested support ($26M required to cover education, training, and documentation changes). I was only allowed to use incremental new disk sales (not life-time costs) in the business case ... and "oh, by the way", customers were buying all the disk being made as CKD ... so any FBA support would be the same amount of sales ... just FBA instead of CKD.
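
The "cell" rounding can be illustrated with the 3380 records-per-track calculation (my recollection of the gcard constants ... 32-byte cells, 1499 cells/track, 15-cell fixed overhead per un-keyed record ... check the gcard ref above before relying on them):

```python
from math import ceil

# Sketch of 3380 records/track: track space is allocated in 32-byte "cells",
# so record overhead + data is rounded UP to cell boundaries.
# Constants (1499 cells/track, 15-cell overhead) are from memory.
def recs_per_track_3380(data_len, key_len=0):
    cells = 15 + ceil((data_len + 12) / 32)   # fixed overhead + data cells
    if key_len:
        cells += ceil((key_len + 12) / 32)    # key field, if any
    return 1499 // cells                      # 1499 cells per track

print(recs_per_track_3380(47476))  # 1  -- max single record exactly fills track
print(recs_per_track_3380(4096))   # 10 -- ten 4K blocks per track
print(recs_per_track_3380(80))     # card-image records
```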

misc. past posts mentioning CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

The explosion in mid-range sales starting in the late 70s ... gave the MVS flagship high-end operating system problems ... at least since there was no corresponding mid-range CKD disk. Eventually, to address the situation, the 3375 CKD disk was brought out ... which was CKD simulation on top of the 3370 FBA disk. It was only minor help since part of the mid-range explosion was a significant drop in computer costs ... and MVS had extremely high people care&feeding costs ... a company with difficulty finding enough MVS skills to staff a central datacenter would be at a loss to find the skills to support several hundred 4341 MVS systems.

some past posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some old email mentioning 4300s ... the IBM low&mid-range entries
https://www.garlic.com/~lynn/lhwemail.html#43xx

4341s sold into the same mid-range market & in similar numbers as vaxes ... old post with a decade of vax numbers, sliced&diced by model, year, us/non-us, etc:
https://www.garlic.com/~lynn/2002f.html#0

the big difference for 4341s was the large, several-hundred corporate orders for distributed computing ... in large part vm/4341s.

note that cp67/cms (and vm370/cms) was always logically fixed-block ... even when using CKD disks ... the disks would be pre-formatted to fixed-length records. later it was trivial for vm370/cms to add 3310/3370 FBA disk support.

recent post in a.f.c. mentioning ckd
https://www.garlic.com/~lynn/2012o.html#21 Assembler vs. COBOL--processing time, space needed

other recent posts mentioning CKD
https://www.garlic.com/~lynn/2012l.html#30 X86 server
https://www.garlic.com/~lynn/2012l.html#78 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012l.html#81 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#100 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#0 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#3 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#4 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#43 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012n.html#65 Fred Brooks on OS/360 "JCL is the worst language ever"
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?

--
virtualization experience starting Jan1968, online at home since Mar1970

ISO documentation of IBM 3375, 3380 and 3390 track format

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISO documentation of IBM 3375, 3380 and 3390 track format
Newsgroups: alt.folklore.computers
Date: Sun, 25 Nov 2012 10:39:49 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
3370 FBA ... mid-range, original thin-film head:
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3370.html
3310 FBA ... low-range
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3310.html


re:
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format

more random folklore trivia ... part of floating/flying thin-film head was "air-bearing" simulation in the head design.

the simulation was being run on SJR's (bldg. 28, diagonally across from bldg. 15) 370/195 ... run under an old MVT system. The batch queue was several weeks deep ... even with priority access ... it could still be a week's turnaround for each air-bearing simulation run.

bldg. 15 across the street had gotten early engineering 3033 for disk testing (possibly #3, bldg. 15 frequently got one of the first engineering models, after what was needed by the processor engineers, for disk testing). bldg.14&15 had been running stand-alone, around-the-clock, 7x24, prescheduled testing. At one point they had tried MVS for concurrent testing ... but found it had 15min MTBF (in that environment) ... requiring manual reboot/re-ipl. misc. past posts getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

When I moved to SJR, I was also allowed to wander around other places in the San Jose area. I offered to rewrite the I/O supervisor to make it bullet-proof and never fail ... so they could do on-demand, anytime, concurrent testing ... significantly improving productivity. Even with all the disk testcells running concurrently, they tended to use only a couple percent of the processor; as a result the engineers set up limited online service on the machines ... able to use the other 98% of the test machines. We offered the guy doing the "air-bearing" simulation a setup on the bldg. 15 3033 machine. Optimized 370/195 code could get 10mips throughput ... while the 3033 was more like 4.5mips ... and a 1hr optimized 195 run might take 2hrs on the 3033 ... but with the 370/195 backlog ... only getting one turnaround a week ... the "air bearing" simulation could still get a couple turnarounds a day (easily a ten-times increase).
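
The turnaround trade-off spelled out (figures from the paragraph above):

```python
# 195 was ~10 MIPS vs ~4.5 MIPS for the 3033, so each run is ~2.2x slower --
# but turnaround went from once a week to a couple times a day.
mips_195, mips_3033 = 10.0, 4.5
run_195_hrs = 1.0
run_3033_hrs = run_195_hrs * mips_195 / mips_3033   # ~2.2 hrs per run

turnarounds_195_per_week = 1       # weeks-deep batch queue
turnarounds_3033_per_week = 2 * 7  # "a couple turnarounds a day"

print(f"{run_3033_hrs:.1f} hrs/run on the 3033, but "
      f"{turnarounds_3033_per_week // turnarounds_195_per_week}x the turnarounds")
```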

past posts mentioning air-bearing simulation:
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
https://www.garlic.com/~lynn/2006x.html#27 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007e.html#43 FBA rant
https://www.garlic.com/~lynn/2007f.html#46 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007j.html#64 Disc Drives
https://www.garlic.com/~lynn/2007l.html#52 Drums: Memory or Peripheral?
https://www.garlic.com/~lynn/2008k.html#77 Disk drive improvements
https://www.garlic.com/~lynn/2008l.html#60 recent mentions of 40+ yr old technology
https://www.garlic.com/~lynn/2009c.html#9 Assembler Question
https://www.garlic.com/~lynn/2009k.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009k.html#75 Disksize history question
https://www.garlic.com/~lynn/2009r.html#51 "Portable" data centers
https://www.garlic.com/~lynn/2011.html#16 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2011f.html#87 Gee... I wonder if I qualify for "old geek"?
https://www.garlic.com/~lynn/2011p.html#26 Deja Cloud?

--
virtualization experience starting Jan1968, online at home since Mar1970

ISO documentation of IBM 3375, 3380 and 3390 track format

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISO documentation of IBM 3375, 3380 and 3390 track format
Newsgroups: alt.folklore.computers
Date: Sun, 25 Nov 2012 10:46:12 -0500
re:
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format

aka floating/flying thin-film heads "flew" closer to the disk surface ... allowing for smaller bits and higher recording bit-density ... as well as smaller inter-track spacing (more bits/track and more tracks per surface). inter-track spacing also comes up in this referenced email
https://www.garlic.com/~lynn/2006s.html#email871230

--
virtualization experience starting Jan1968, online at home since Mar1970

ISO documentation of IBM 3375, 3380 and 3390 track format

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISO documentation of IBM 3375, 3380 and 3390 track format
Newsgroups: alt.folklore.computers
Date: Sun, 25 Nov 2012 10:58:04 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
bldg. 15 across the street had gotten early engineering 3033 for disk testing (possibly #3, bldg. 15 frequently got one of the first engineering models, after what was needed by the processor engineers, for disk testing). bldg.14&15 had been running stand-alone, around-the-clock, 7x24, prescheduled testing. At one point they had tried MVS for concurrent testing ... but found it had 15min MTBF (in that environment) ... requiring manual reboot/re-ipl. misc. past posts getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk


re:
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#60 ISO documentation of IBM 3375, 3380 and 3390 track format

later I did an internal-only report on the work for the never-fail i/o supervisor and happened to mention the MVS 15min MTBF ... which brought down the wrath of the MVS organization on my head (I believe they would have had me fired if they could have figured out how) ... and it would periodically crop up for the rest of my career.

I've commented before that part of this may have been the change in corporate culture after the FS failure. misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

part of the scenario was that in the wake of the FS failure (and sycophancy and make no waves), many careers became seriously oriented towards "managing information up the chain". From Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993:
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with sycophancy and make no waves under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat

... snip ...

another quote from the book:
But because of the heavy investment of face by the top management, F/S took years to kill, although its wrongheadedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

ISO documentation of IBM 3375, 3380 and 3390 track format

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISO documentation of IBM 3375, 3380 and 3390 track format
Newsgroups: alt.folklore.computers
Date: Sun, 25 Nov 2012 15:13:10 -0500
John Levine <johnl@iecc.com> writes:
Here's the manual for the 3380:

http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/dasd/GA26-1664-1_3380_IBM_3380_Direct_Access_Storage_Description_and_Users_Guide_Dec81.pdf

It describes the track format on pages 8-12, and it's the same old CKD, format the track before you can do anything else. The drive did have RPS, but I don't think that means there's secretly fixed blocks, just rotational markers on the platter.


re:
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#60 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#61 ISO documentation of IBM 3375, 3380 and 3390 track format

RPS started on the CKD 3330 (& fixed-head CKD 2305 disk) ... there were actually 20 track surfaces ... 19 for data and the 20th containing RPS information.

the effect of search overhead (even single track search) was so egregious ... that RPS provided a little performance compensation.

normal channel program was

seek (move disk arm to specified track)
search (equal, <, > on some record or index identifier)
tic *-8 (repeat if search didn't match)
read/write (data transfer)

the search operation would continue with the immediately following channel command if the search failed, but would skip over one command if it succeeded. normal use tended to be an initial "stand-alone" seek (start moving the arm, but immediately free the channel and controller). after the disk signals complete, the operation is repeated with the seek chained to the search operation. for a random record, single track/record search, the operation ties up the channel, controller and disk ... on avg. for half a revolution (where channel & controller are normally shared resources that could be used for lots of other stuff). a vtoc & PDS index multi-track search could be multiple revolutions ... on 3330 as much as a full cylinder of 19 revolutions

for applications that effectively tried to simulate a form of fixed block ... it was possible to use "set sector" to partially mitigate the egregious search overhead (tying up channel&controller resources)

seek (move disk arm to specified track)
set sector (disconnect from channel&controller until specific sector)
search (equal, <, > on some record or index identifier)
tic *-8 (repeat if search didn't match)
read/write (data transfer)

gcard reference for calculating record sector number
https://www.garlic.com/~lynn/gcard.html#26.4

the application tries to specify a sector number that is just moments before the start of the desired record rotates under the head; the "set sector" command disconnects from channel&controller (in much the same way as a "stand-alone" seek) and then issues a delayed reconnect request when the desired sector is reached ... it then chains to the search command which (hopefully) finds an immediate match on the first record/index checked. This can significantly reduce the (shared resource) channel&controller overhead of doing disk operations ... especially in busy systems with a large number of concurrent disk operations using the same shared resources.
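the flavor of the sector calculation can be sketched roughly ... this is only an illustration of the idea (the real gcard formula uses device-specific gap/overhead tables; the function, the simple proportional mapping, and the 13030-byte/128-sector 3330 figures here are my assumptions):

```python
def estimate_sector(record_offset_bytes, track_capacity_bytes=13030,
                    sectors_per_track=128):
    """Rough estimate of the rotational sector just ahead of a record.

    Maps the record's byte offset on the track to one of the track's
    angular sector positions, then backs off one sector so the delayed
    reconnect fires just before the record rotates under the head.
    """
    frac = record_offset_bytes / track_capacity_bytes   # angular position 0..1
    sector = int(frac * sectors_per_track)
    return max(sector - 1, 0)          # aim slightly ahead of the record

# e.g. a record starting halfway around a 3330 track
print(estimate_sector(6513))
```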

The downside was RPS-miss ... in the days before disk&controller level caches ... the "set sector" would try to reconnect ... but if the controller &/or channel was busy with some other disk ... it would not succeed, experiencing a full revolution delay while the disk rotated back around to the same sector and tried again. Note that 3310&3370 FBA didn't have the search operation overhead ... and therefore didn't require the "set sector" mechanism ... the locate command providing the operation
https://www.garlic.com/~lynn/gcard.html#26.2
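the RPS-miss penalty can be modeled crudely ... a sketch under my own assumptions (not from the original analysis): each reconnect attempt independently finds the channel/controller busy with probability p, and every miss costs one full revolution:

```python
def expected_rps_miss_delay(p_busy, rpm=3600):
    """Expected extra rotational delay (ms) per I/O from RPS misses.

    Geometric-retry model: expected number of misses is p/(1-p),
    each miss costing one full revolution (~16.7ms at 3600rpm).
    """
    rev_ms = 60000.0 / rpm
    expected_misses = p_busy / (1.0 - p_busy)
    return expected_misses * rev_ms

# a 30%-busy shared path adds ~7ms per I/O on average, and the
# penalty blows up non-linearly as utilization climbs
print(round(expected_rps_miss_delay(0.3), 1))
```

which shows why RPS-miss mattered most on exactly the busy systems that "set sector" was supposed to help.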

I had started talking about the decline in relative system disk performance in the 70s ... and disk division executives objected to this comparison that I produced in the early 80s (of relative system performance between cp67/cms on 360/67&2314s and vm370/cms on 3081&3380s running similar kinds of workload) ... aka relative system disk throughput had declined by an order of magnitude during the period ... repeated a couple times last decade:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360

They assigned the division performance group to refute my comparison ... but after several weeks ... they came back and said I had slightly understated the case ... not fully taking into account RPS-miss.

the analysis was then turned into a SHARE (ibm user group) presentation (B874) on some techniques for improving disk system throughput

a few past posts mentioning B874:
https://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#46 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?
https://www.garlic.com/~lynn/2007s.html#9 Poster of computer hardware events?
https://www.garlic.com/~lynn/2008c.html#88 CPU time differences for the same job
https://www.garlic.com/~lynn/2009g.html#71 308x Processors - was "Mainframe articles"
https://www.garlic.com/~lynn/2009i.html#7 My Vintage Dream PC
https://www.garlic.com/~lynn/2009k.html#34 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009k.html#52 Hercules; more information requested
https://www.garlic.com/~lynn/2009l.html#67 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2010c.html#1 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010h.html#70 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010l.html#31 Wax ON Wax OFF -- Tuning VSAM considerations
https://www.garlic.com/~lynn/2010l.html#32 OS idling
https://www.garlic.com/~lynn/2010l.html#33 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010n.html#18 Mainframe Slang terms
https://www.garlic.com/~lynn/2010q.html#30 IBM Historic computing
https://www.garlic.com/~lynn/2011.html#35 CKD DASD
https://www.garlic.com/~lynn/2011.html#61 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011e.html#1 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011g.html#59 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011p.html#5 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011p.html#32 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2012b.html#73 Tape vs DASD - Speed/time/CPU utilization
https://www.garlic.com/~lynn/2012e.html#39 A bit of IBM System 360 nostalgia

a few other past posts mentioning RPS-miss:
https://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2002b.html#1 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2007h.html#9 21st Century ISA goals?
https://www.garlic.com/~lynn/2008d.html#52 Throwaway cores
https://www.garlic.com/~lynn/2010.html#48 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2012d.html#75 megabytes per second

--
virtualization experience starting Jan1968, online at home since Mar1970

Is it possible to hack mainframe system??

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 25 Nov 2012
Subject: Is it possible to hack mainframe system??
Blog: Mainframe Exports
re:
http://lnkd.in/vJMNrd

During the congressional Madoff hearings, they had the person who had been trying unsuccessfully for a decade to get the SEC to do something about Madoff (the SEC's hands were finally forced when Madoff turned himself in). Congress asked the person if new regulations were needed ... he replied that while there might be need for new regulations, much more important was a change in transparency and visibility (in wallstreet operation). I ran into something similar at NSCC (since merged with DTC for the current DTCC) when asked to improve the integrity of trading floor transactions (it turns out a side-effect would have greatly increased transparency and visibility ... which is antithetical to wallstreet culture).

A reference to long ago and far away (I didn't learn about these guys until much later) ... gone 404 but lives on at the wayback machine:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

in the period, the os/360 platforms had little or no security ... seeing incremental addons over the years.

There have been long-term statistics that "insiders" have been involved in at least 70% of identity theft (especially the form of "account fraud" where crooks can then use the information for fraudulent financial transactions).

In the early 80s there was a situation involving the above mentioned organizations ... they were in the habit of having complete source that matched the running system ... in order to double check for various kinds of backdoors. CP67 and then VM370 not only shipped complete source ... but also shipped maintenance in source form (at least up until the OCO-wars). They asked if they could get equivalent source that exactly matched an operational, running MVS system. The folklore is that $5M was spent by a taskforce investigating the issue before coming back that it wasn't practical to produce exact source for a specific operational MVS system.

--
virtualization experience starting Jan1968, online at home since Mar1970

Random thoughts: Low power, High performance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Random thoughts: Low power, High performance
Newsgroups: comp.arch
Date: Mon, 26 Nov 2012 08:51:36 -0500
Robert Wessel <robertwessel2@yahoo.com> writes:
At least some of that was peculiar to the I/O device in question. FBA disks, for example, needed none of that, and even modern ECKD disks do away with the need for RPS. And actually RPS was never actually needed (and not even included in the 2311/2314, IIRC), but was an optimization (applied at the wrong level, of course) to free the channel during disk rotational delays, so that other I/Os could progress on that channel.

re:
https://www.garlic.com/~lynn/2012o.html#46 Random thoughts: Low power, High performance

recent, much more detailed discussion of sectors in a.f.c.
https://www.garlic.com/~lynn/2012o.html#62 ISO documentation of IBM 3375, 3380 and 3390 track format

The CKD design had search and multi-track search ... to find a record that matched the search argument ... it was also defined as refetching the search argument from processor memory on every compare. In the corporate favorite-son batch operating system (and descendants), multi-track search was used (at least) for the VTOC (master device file directory) and PDS index (a library file) ... on 3330 it could tie up channel, controller, and device for 19 revolutions.

If an application had pre-formatted a disk area in some regular format ... say with increasing record numbers ... then it could approximately calculate the angular location of the start of a record on the disk platter, and insert a "set sector" command in front of the "search" command. The "set sector" would disconnect from channel/controller and then only reconnect at the specified angular rotation ... hopefully just ahead of the desired record ... so the search command would only result in a single compare. The upside was that it freed the shared-resource channel&controller from multiple record compares during the corresponding rotation of the disk. The downside was "RPS-miss": the channel &/or controller could be busy at the time the disk attempts to reconnect ... and it would have to wait a full revolution before trying again.

This was before disk-level buffers/caches that could do transfers without requiring all the shared i/o resources to be synchronized with disk rotation.

ECKD was originally created for the calypso speed-matching buffer so that 3mbyte/sec transfer 3380/3880 could be attached to older 370s that had slower-speed channels (FBA also wouldn't have needed any of that).

I had been told that even if I provided fully integrated&tested FBA support for the favorite-son batch operating system, I still needed a $26M business case ... to cover documentation, training, education, etc; I wasn't able to use lifetime costs ... just incremental new sales ... and the claim was they were selling as many CKD as they could make ... so FBA support would just move the same amount of CKD disk sales to FBA disk sales. Note however, the difficulty of getting the original ECKD (calypso) working cost nearly that $26M.

Note that real CKD/ECKD disks haven't been manufactured for decades ... just simulated on industry-standard fixed-block disks. Along the way, the customers have had to bear significant costs because CKD/ECKD would expose disk geometry, requiring expensive software changes every time a new disk was introduced. More recently they have settled on using a relatively standard disk geometry for the simulated disks (to mitigate some of those costs).

--
virtualization experience starting Jan1968, online at home since Mar1970

ISO documentation of IBM 3375, 3380 and 3390 track format

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISO documentation of IBM 3375, 3380 and 3390 track format
Newsgroups: alt.folklore.computers
Date: Mon, 26 Nov 2012 09:15:52 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
You got labeled "not a team player"? ;-)

re:
https://www.garlic.com/~lynn/2012o.html#61 ISO documentation of IBM 3375, 3380 and 3390 track format

it was about this time that I first sponsored Boyd's briefings at IBM. I found some affinity with lots that Boyd had to say. One of the references to Boyd's oft-quoted To Be Or To Do:
"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction .... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada. 17 September 1999


... snip ...

USAF had pretty much disowned Boyd at the time of his death ... but Marines had adopted him ... the commandant of the Marine Corps leveraging Boyd in the 1990 time-frame for a Marine Corps makeover and Boyd's effects went to the Marine Corps library at Quantico.

misc. web URLs and past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

ISO documentation of IBM 3375, 3380 and 3390 track format

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISO documentation of IBM 3375, 3380 and 3390 track format
Newsgroups: alt.folklore.computers
Date: Tue, 27 Nov 2012 15:18:12 -0500
scott@slp53.sl.home (Scott Lurndal) writes:
An OS crash or data corruption may result in the kernel buffer(s) being lost before being written (could be more than 4k worth).

A power loss may result in the drive cache contents being lost prior to being committed to the media.


re:
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#60 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#61 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#62 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#65 ISO documentation of IBM 3375, 3380 and 3390 track format

The CMS "CDF" filesystem from the mid-60s (as well as cp67 & then vm370 page operations) was logical fixed-block ... a CKD disk would be formatted to the desired fixed-block format.

When the CMS filesystem updated filesystem metadata ... updated metadata records were always written to new, unused disk records ... and then the master file directory (which pointed to all the metadata) was written in a single operation (either it completed and the updated metadata would be used ... or it didn't happen and the old metadata would be used). A corrupted write of the MFD or metadata would be indicated by hardware errors, indicating a filesystem needing restoring.

In the mid-70s, the CMS "EDF" filesystem was introduced ... which switched from 800-byte fixed-block to 1k, 2k, & 4k fixed-block filesystem organization ... offering larger files and some other features. The other thing was that it provided a fix for an IBM CKD DASD hardware "bug". It turns out that if a power failure happened at the appropriate moment ... processor memory lost power ... but there was sufficient power to finish a write operation in progress. With the loss of the processor memory, the write operation would continue supplying all zeros ... and then write the correct error-correcting information. If this happened to the CMS master-file-directory ... on restart ... the master-file-directory would be successfully read (with no hardware error indication) ... but lots of the information could be erroneous (all zeros). To compensate for this particular failure mode, the CMS "EDF" filesystem introduced a pair of master-file-directory records and would alternate writes between the two ... with the last bytes of the record being a version number. On restart, it would read both MFD records and use the one with the latest version number (a record that had propagated zeros wouldn't have a version number newer than the previous version).
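the EDF-style recovery rule above can be sketched in a few lines ... the field layout (a 4-byte big-endian version number in the last bytes of each record) and the function name are my own hypothetical choices, just to illustrate the alternating-copy idea:

```python
def pick_mfd(rec_a, rec_b):
    """Choose which of the two alternating MFD copies to trust.

    The last bytes of each master-file-directory record hold a
    version number; on restart use the copy with the higher version.
    A write torn by the power-fail zeros bug leaves a zeroed tail,
    so its "version" can never exceed the previously committed copy's.
    """
    ver_a = int.from_bytes(rec_a[-4:], "big")
    ver_b = int.from_bytes(rec_b[-4:], "big")
    return rec_a if ver_a >= ver_b else rec_b

good = b"metadata-v7" + (7).to_bytes(4, "big")   # committed copy, version 7
torn = b"\x00" * 15                              # write finished with zeros
assert pick_mfd(good, torn) is good              # zeroed copy loses (0 < 7)
```

note the key property: the torn write reads back without any hardware error, so the version comparison (not an I/O error) is what detects it.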

misc. past posts mentioning DASD, CKD, fixed-block, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

recent posts mentioning doing a paged-mapped implementation of the cp67/cms CDF filesystem ... porting it to vm370/cms and doing paged-mapped version of both CDF & EDF
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#43 Regarding Time Sharing

references old post with page-mapped and non-page-mapped CMS filesystem operation on 3330 disks comparison
https://www.garlic.com/~lynn/2006.html#25 DCS as SWAP disk for z/Linux

old past mentioning doing page-mapped support
https://www.garlic.com/~lynn/submain.html#mmap

in the 70s, 801/risc was created internally. one of the processor features was support for what was called "database memory" (something like what is currently "transactional memory"). it was claimed to make it easy to give almost any application transactional DBMS characteristics by tracking fine-grain storage alterations. one of the 801 processors was ROMP, which was originally designed to be used for the Displaywriter follow-on. When that got killed, they looked around and decided to "pivot" ROMP to the unix workstation market ... and hired the company that had done the AT&T unix port to ibm/pc for PC/IX ... to do one for ROMP. Later, the unix filesystem code was organized so that all filesystem metadata was in "database memory" and changes to the metadata could be tracked, with changes committed to a log file. Restart after failure was to read the log and recover the aix filesystem metadata to a consistent state.

There was then an objective of making the AIX filesystem code portable ... and so the Palo Alto group went through it and turned all the implicit tracking of changes (using database memory) into explicit log calls (allowing it to be ported to platforms that lacked database memory support). However, one of the surprises was that the explicit logging version turned out to be faster than the database-memory implementation (when run on the same, identical 801 hardware).

misc. past posts mentioning 801, romp, rios, somerset, rs/6000, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

kludge, was Re: ISO documentation of IBM 3375, 3380

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: kludge, was Re: ISO documentation of IBM 3375, 3380
Newsgroups: alt.folklore.computers
Date: Wed, 28 Nov 2012 11:55:07 -0500
John Levine <johnl@iecc.com> writes:
I always said it's something that works, but for the wrong reason.

CKD implemented on top of FBA would certainly qualify.

We all agree that kludge rhymes with huge, not with sludge, right?


os/360 heritage: channel programs with real addresses built by applications in application space, as well as the extensive pointer-passing API paradigm ... carried forward into a paradigm where 1) applications ran in a virtual address space (and the built channel programs had to be copied with real addresses substituted for the virtual addresses) and 2) each application ran in its own virtual address space ... but there were now numerous called programs that resided in separate address spaces (requiring a kludge for called applications where the passed pointer was to parameters in a different address space)

--
virtualization experience starting Jan1968, online at home since Mar1970

PC/mainframe browser(s) was Re: 360/20, was 1132 printer history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
Newsgroups: alt.folklore.computers
Date: Thu, 29 Nov 2012 11:23:44 -0500
jmfbahciv <See.above@aol.com> writes:
Of course. Those who read "TCP/IP" and had a need to know more details could then go to the document which does go into the details. The prof's assignments were not at that level. If any of his students were to write a spec, the spec review would slice and dice it. The only people who might benefit from his insistence of wasted word salads might be marketers.

when we were doing ha/cmp ... went through both the documentation and source code for tcp/ip ... to identify various vulnerabilities. misc. past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

summer of '95, somebody from the largest online service flew out to the west coast and bought me a hamburger after work ... they had been having a problem for about a month with some internet-facing servers crashing ... they had all the experts in over the month trying to identify/fix the problem. I mentioned that it was one of the vulnerabilities identified during ha/cmp ... and gave him a quick&dirty fix that was applied later that night. I then tried to get the major vendors to fix the problem ... but nobody was interested. Part of the issue (in this case) was sort of a gap between what different parts of the code implemented and what was specified in the documentation (and the different pieces of implemented code never expecting an exploit of this nature).

almost exactly a year later ... an ISP in NYC came down with the same problem and it hit the press ... then all the vendors were patting themselves on the back on how quickly they reacted.

I've mentioned this cluster scale-up meeting in Ellison's conference room (first part of jan1992)
https://www.garlic.com/~lynn/95.html#13

and old email on cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

possibly within hrs of the last email in the above (end of 1992), cluster scale-up is transferred and we are told we can't work on anything with more than four processors; a couple weeks later, it is announced as an IBM supercomputer for numeric and scientific only (we had been working on both commercial/dbms as well as numeric/scientific concurrently)
https://www.garlic.com/~lynn/2001n.html#6000clusters1

the events were big motivation in deciding to leave. Also, a couple of the other people in the ellison meeting depart and show up at a small client/server startup responsible for something called the "commerce server". we are brought in as consultants because they want to do payment transactions on the server; the startup had also invented this technology called "SSL" they wanted to use; the result is now frequently called "electronic commerce".

I have complete authority over the implementation of something called the "payment gateway" ... it sits on the internet and acts as the interface between commerce servers and the payment networks ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

as well as the interface between the webservers and the payment gateway. from ha/cmp days we had built redundant gateways with connections into different places in the internet backbones (to minimize outages). For availability we had used ip routing updates as well as multiple A-records (DNS name mapping to multiple A-records, i.e. multiple IP addresses). However, while deploying payment gateways, the internet backbone went through a transition to hierarchical routing ... eliminating being able to use ip-routing updates for availability ... having to fall back to only multiple A-records for availability.

I then try to convince the browser people ... giving a class on multiple A-records ... but they claim it is too advanced. I then show them that it is standard in the 4.3 reno/tahoe client code ... they still say it is too complex. It was as if, unless it was covered in college tcp/ip text books ... they weren't going to be able to do it.

An issue at the time was that an early "commerce server" adopter was a national retail sporting goods company that advertised on national sunday football games ... and was expecting an uptick in activity around the game. A problem was that (at the time) some number of ISPs still took service offline during the day on sunday for maintenance. Even though the "commerce server" had multiple links into the internet backbone for redundancy/availability, the browsers would still only try the first ip-address in the DNS list ... if that failed ... they wouldn't try any of the alternative ip-addresses ... just give up.
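the multiple-A-record fallback the browsers wouldn't do amounts to only a few lines of client code ... a minimal modern sketch (my own illustration; present-day socket.create_connection does essentially this internally):

```python
import socket

def connect_any(host, port, timeout=5.0):
    """Try every address behind a DNS name, not just the first.

    Resolve the name, then attempt each returned address in turn
    until one connects, instead of giving up when the first fails.
    """
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(addr)
            return s                   # first address that answers wins
        except OSError as err:
            last_err = err             # this address is down; try the next
    raise last_err or OSError("no addresses for %r" % host)
```

with this loop, a server with one dead link (e.g. an ISP down for sunday maintenance) just costs one failed attempt before the client reaches a live address.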

--
virtualization experience starting Jan1968, online at home since Mar1970

Can Open Source Ratings Break the Ratings Agency Oligopoly?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 30 Nov 2012
Subject: Can Open Source Ratings Break the Ratings Agency Oligopoly?
Blog: Financial Crime Risk, Fraud and Security
also Google+
https://plus.google.com/u/0/102794881687002297268/posts/KDSUEX5Nzq2

Can Open Source Ratings Break the Ratings Agency Oligopoly? www.nakedcapitalism.com/2012/11/can-open-source-ratings-break-the-ratings-agency-oligopoly.html

from above:
One of the causes of the financial crisis that should have been relatively easy to fix was the over-reliance on ratings agencies. They wield considerable power, suffer from poor incentives, in particular, that they can do terrible work yet are at no risk of being fired thanks to their oligopoly position, and are seldom exposed to liability (they have bizarrely been able to argue that their research is journalistic opinion, which gives them a First Amendment exemption)

... snip ...

Testimony in the congressional hearings into the pivotal role that the rating agencies played in the financial mess was that the rating agencies were paid for triple-A ratings even when both the rating agencies and the sellers knew that the toxic CDOs weren't worth triple-A. TV news commentary during the hearings was that the rating agencies would likely avoid federal prosecution with the threat of downgrading US gov. ratings.

Securitized mortgages had been used during the S&L crisis to obfuscate fraudulent mortgages. At the end of the last century we were asked to look at improving the integrity of mortgage supporting documents (as a countermeasure to fraud) ... however, paid-for triple-A ratings trump supporting documents ... leading to no-documentation (& no-down) mortgages and more recently the robo-signing scandal to create documents after the fact (and with no documentation there is no longer an issue of documentation integrity)

with triple-A ratings ... could do $27T during the mess:
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

also with guaranteed triple-A, loan originators no longer needed to care about borrower's qualification (/documentation) or loan quality. Street was motivated by enormous fees and commissions on the $27T ... and then being able to turn around and take bets on their failure.

another recent item on the subject:

A 10 Step Program to Replace Legacy Credit Ratings with Modern Default Probabilities for Counter-Party and Credit Risk Assessment
http://www.kamakuraco.com/Blog/tabid/231/EntryId/451/A-10-Step-Program-to-Replace-Legacy-Credit-Ratings-with-Modern-Default-Probabilities-for-Counter-Party-and-Credit-Risk-Assessment.aspx

recent past posts mentioning the $27T:
https://www.garlic.com/~lynn/2012.html#21 Zombie Banks
https://www.garlic.com/~lynn/2012.html#32 Wall Street Bonuses May Reach Lowest Level in 3 Years
https://www.garlic.com/~lynn/2012b.html#19 "Buffett Tax" and truth in numbers
https://www.garlic.com/~lynn/2012b.html#65 Why Wall Street Should Stop Whining
https://www.garlic.com/~lynn/2012b.html#95 Bank of America Fined $1 Billion for Mortgage Fraud
https://www.garlic.com/~lynn/2012c.html#30 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#31 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#32 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#36 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#37 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#38 The Death of MERS
https://www.garlic.com/~lynn/2012c.html#45 Fannie, Freddie Charge Taxpayers For Legal Bills
https://www.garlic.com/~lynn/2012c.html#46 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#54 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#55 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#32 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#42 China's J-20 Stealth Fighter Is Already Doing A Whole Lot More Than Anyone Expected
https://www.garlic.com/~lynn/2012e.html#23 Are mothers naturally better at OODA because they always have the Win in mind?
https://www.garlic.com/~lynn/2012e.html#40 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#42 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#58 Word Length
https://www.garlic.com/~lynn/2012f.html#31 Rome speaks to us. Their example can inspire us to avoid their fate
https://www.garlic.com/~lynn/2012f.html#63 One maths formula and the financial crash
https://www.garlic.com/~lynn/2012f.html#66 Predator GE: We Bring Bad Things to Life
https://www.garlic.com/~lynn/2012f.html#69 Freefall: America, Free Markets, and the Sinking of the World Economy
https://www.garlic.com/~lynn/2012f.html#75 Fed Report: Mortgage Mess NOT an Inside Job
https://www.garlic.com/~lynn/2012f.html#80 The Failure of Central Planning
https://www.garlic.com/~lynn/2012f.html#87 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012g.html#6 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#7 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#8 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#20 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012g.html#22 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012g.html#28 REPEAL OF GLASS-STEAGALL DID NOT CAUSE THE FINANCIAL CRISIS - WHAT DO YOU THINK?
https://www.garlic.com/~lynn/2012g.html#71 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#76 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#26 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#32 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#63 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'?
https://www.garlic.com/~lynn/2012h.html#75 Interesting News Article
https://www.garlic.com/~lynn/2012i.html#13 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#14 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#51 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'? thoughts please
https://www.garlic.com/~lynn/2012j.html#28 Why Asian companies struggle to manage global workers
https://www.garlic.com/~lynn/2012j.html#65 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012k.html#43 Core characteristics of resilience
https://www.garlic.com/~lynn/2012k.html#75 What's the bigger risk, retiring too soon, or too late?
https://www.garlic.com/~lynn/2012l.html#64 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2012m.html#56 General Mills computer
https://www.garlic.com/~lynn/2012n.html#6 General Mills computer
https://www.garlic.com/~lynn/2012n.html#12 Why Auditors Fail To Detect Frauds?
https://www.garlic.com/~lynn/2012o.html#7 Beyond the 10,000 Hour Rule
https://www.garlic.com/~lynn/2012o.html#26 Why bankers rule the world

--
virtualization experience starting Jan1968, online at home since Mar1970

bubble memory

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: bubble memory
Newsgroups: alt.folklore.computers
Date: Fri, 30 Nov 2012 20:39:14 -0500
Alan Bowler <atbowler@thinkage.ca> writes:
The hope was that bubble memory could achieve high enough density to replace disks, and for a while it did look to be on track for that. At the time a disk unit was around the size of a washing machine (okay, 2/3s of that) and held around 100 megabytes. Bubble held the promise of high reliability (no head crashes) and likely lower power. People expected it to piggy-back on the fabrication techniques that were making SRAM and DRAM big enough to displace core memory.

Then winchester drives appeared and disk density took off.


thin-film flying heads were smaller and flew closer to the platter surface ... which meant that magnetic recording spots could be smaller and closer together ... increasing bits per track as well as reducing the spacing between tracks.

3330-1 was 100mbytes and 3330-11 was 200mbytes
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3330.html

3340 ... "featured a smaller, lighter read/write head that could ride closer to the disk surface -- on an air film" ... came in 35megabyte & 70megabyte modules
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3340.html
originally was to be two 30mbyte modules ... aka "30-30" or winchester
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives

3370 introduced thin-film in 1979, 1.8mbyte/sec transfer
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3370.html

then 3380 in Oct1981 ... as well as 3mbyte/sec transfer
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380.html

the 3380 originally had spacing between tracks equivalent to 20 track widths, with 885 tracks ... 3380E (1985) then doubled the number of tracks to 1770 by cutting the spacing between tracks in half, and 3380K (1987) cut the spacing again, increasing the number of tracks to 2655.

recent post mentioning "air bearing" simulation for floating heads late 70s
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format

3390, 1989
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3390.html

other posts in 3375/3380/3390 thread:
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#60 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#61 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#62 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#65 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2012o.html#66 ISO documentation of IBM 3375, 3380 and 3390 track format

--
virtualization experience starting Jan1968, online at home since Mar1970

Is orientation always because what has been observed? What are your 'direct' experiences?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 01 Dec 2012
Subject: Is orientation always because what has been observed? What are your 'direct' experiences?
Blog: Boyd Strategy
re:
http://lnkd.in/U7uAY8

chicken&egg ... some amount of babies' brains is supposedly prewired ... from eons of evolution and natural selection ... but they immediately start to change based on experience. is the brain at birth considered the starting point ... or the product of long eons of evolution?

i've frequently commented about boyd's emphasis on constantly observing from every possible facet as an antidote to preconceived notions (the OODA loops are running constantly, not serially/sequentially; the loop can also be used to represent circling a subject to obtain every possible view). at the micro-level ... there is an existing orientation at the moment a new situation is encountered ... but at the macro-level, that orientation would be the product of a long sequence of experiences (including heredity and evolution).

a tangent would be similar experiences resulting in different orientations, as well as radically different experiences resulting in similar orientations (for instance the difference between "To Be" and "To Do" orientations)

Note that the pivot basically says that the game hasn't changed ... just the opponents.

there is this
http://www.phibetaiota.net/2012/11/penguin-trans-pacific-partnership-corporate-legalized-theft-in-3-parts/
this
http://www.phibetaiota.net/2011/09/chuck-spinney-bin-laden-perpetual-war-total-cost/
this
http://chuckspinney.blogspot.com/p/domestic-roots-of-perpetual-war.html
and more recently this
http://chuckspinney.blogspot.com/2012/11/something-to-think-about-as-we-heave.html

to some extent the "pivot" theme may be obfuscation and misdirection away from the fact that the game needs to be changed.

in another recent forum about agile/adapting ... repeated references that I've periodically cited here ... i.e. "inclusive/exclusive" (... as opposed to conservative/liberal or equality/inequality; where exclusive/inequality is much more likely to be preoccupied with preserving the status quo) ... significant overlap with Boyd's To Be or To Do.

Stiglitz's Freefall: America, Free Markets, and the Sinking of the World Economy
https://www.amazon.com/Freefall-America-Markets-Sinking-ebook/dp/B0035YDM9E
pg.271
Standard economic theory (the neoclassical model discussed earlier in this chapter) has had little to say about innovation, even though most of the increases in U.S. standards of living in the past hundred years have come from technical progress. As I noted earlier, just as "information" was outside the old models, so too was innovation.

... snip ...

Both Webb's "Born Fighting: How the Scots-Irish Shaped America"
https://www.amazon.com/Born-Fighting-Scots-Irish-America-ebook/dp/B000FCKGTS
and "Why Nations Fail"
https://www.amazon.com/Why-Nations-Fail-Prosperity-ebook/dp/B0058Z4NR8

spend quite a bit of time on inclusive (or bottom-up) in contrast to exclusive (English being top-down, exclusive society).

Fiske's position (history lectures from the 1880s; my wife's father got a set for some distinction at west point) was that the Scottish influence from the mid-Atlantic states carried the US form of government; if the English settlers had prevailed, the form of government would have been quite different. also
http://marginalrevolution.com/marginalrevolution/2011/12/teachers-dont-like-creative-students.html

other reference:

Inequality and Investment Bubbles: A Clearer Link Is Established
http://www.sciencedaily.com/releases/2012/04/120419153917.htm

"Why Nations Fail" discusses Spanish conquest of the new world which was plunder and enslave the local population (and keeping them at subsistence level). It contrasts it with the English attempting to emulate a similar strategy early 1600s for Jamestown in Virginia. Jamestown almost starved the first two years because they originally sent over skills oriented to plundering and enslaving the local population (emulating the Spanish model by the "Virginia Corporation" given the crown charter). Virginia, Maryland and Carolina then changed the strategy to enslaving large numbers of Englishman that had no rights; somewhat feudal, the "leet-men" had no rights, pg27:
The clauses of the Fundamental Constitutions laid out a rigid social structure. At the bottom were the "leet-men," with clause 23 noting, "All the children of leet-men shall be leet-men, and so to all generations."

...

part of the previous mentioned reference to "Born Fighting: How the Scots-Irish Shaped America", "Why Nations Fail" and Fiske's history lectures (from 1880s)

there is also Diamond's "Guns, Germs and Steel"
https://www.amazon.com/Guns-Germs-Steel-Societies-ebook/dp/B000VDUWMC
and "Collapse: How Societies Choose to Fail or Succeed"
https://www.amazon.com/Collapse-Societies-Succeed-Revised-ebook/dp/B004H0M8EA

they all touch on exclusive societies that frequently get totally focused on protecting the status quo ... against inclusive societies that tend to be the sources of creativity, invention, and innovation.

There is a danger that "simplifying members of society" ... gets contorted into part of protecting the status quo of an exclusive society. The inclusive theme behind creativity, invention, and innovation ... is increasing the capability of each member of society.

more take on teachers don't like creative students

Google's Thrun: 'We're really dumbing down our children'
http://blog.foreignpolicy.com/posts/2012/11/29/googles_thrun_were_really_dumbing_down_our_children

another take on british inclusive/exclusive (equality/inequality), Seven Pillars of Wisdom, pg519/loc14754-55
The manner of the British officers toward their men struck horror into my bodyguard, who had never seen personal inequality before.

... snip ...

and related to the item about the English initially attempting to emulate spain/portugal in the Jamestown settlement ... the English were partially saved from themselves because the Indians in the area of jamestown weren't conducive to enslaving (so they then resorted to other English as slaves):

The Influence of Sea Power Upon History, 1660-1783 (A. T. Mahan), 1890,
http://www.gutenberg.org/ebooks/13529
loc1287-91:
The mines of Brazil were the ruin of Portugal, as those of Mexico and Peru had been of Spain; all manufactures fell into insane contempt; ere long the English supplied the Portuguese not only with clothes, but with all merchandise, all commodities, even to salt-fish and grain.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

AMC proposes 1980s computer TV series "Halt & Catch Fire"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AMC proposes 1980s computer TV series "Halt & Catch Fire"
Newsgroups: alt.folklore.computers
Date: Sat, 01 Dec 2012 23:49:13 -0500
Michael Black <et472@ncf.ca> writes:
I thought that was the mainframe that had some useful undocumented op-codes. There was definitely one that someone had found one or two on, and it was useful, so the "bug" had to remain. Or maybe it was that someone designed a clone of the 360 and had to copy the bug too?

The fact that such things had been found on a mainframe (and I don't know if that was accident or a deliberate search) caused people to look when the microprocessors came along.


things were fairly strict about all processors toeing the line ... there was a (privileged) "diagnose" instruction ... that required being in supervisor state ... and was defined as processor specific ... lots of microcode not part of the 360/370 architecture ... different microcode on different models being selected by various diagnose parameters.

emulators were common ... which when activated might also activate new instructions. recent post
https://www.garlic.com/~lynn/2011j.html#46 Suffix of 64 bit instsruction

... diagnose instruction displacement x'3cc' selects the emulator function, turning on/off special emulator instructions (x'99' opcode).

Getting into supervisor/privileged state and executing diagnose instruction with all possible parameters ... might find all sorts of model specific microprograms.

there are also "RPQ" instructions ... like the CPS (conversational programming system) "assists" for 360/50 ... recent post
https://www.garlic.com/~lynn/2012n.html#26 Is there a correspondence between 64-bit IBM mainframes and PoOps editions levels?

above references a description (on bitsavers) done by Allen-Babcock (under contract to IBM) ... including a couple of list search operations.

There was also a "search list" RPQ instruction done by Lincoln Labs that was installed on some number of 360/67s (various releases of cp67 used the instruction ... and if it wasn't installed ... would simulate its operation in software, after first taking an invalid opcode program exception)

leading up to the virtual memory announcement for 370 ... the various 370 models had to develop retrofit hardware for 370s already in customer shops. the 370/165 was running into lots of trouble implementing the full virtual memory architecture. after some delays, it was decided to eliminate several features from the full 370 virtual memory implementation to buy six months on the schedule for retrofitting virtual memory hardware to the 370/165. This required that most of the other models that had already implemented the full specification delete the features dropped for the 370/165. It also caused problems for various software that had already been implemented assuming the full 370 virtual memory specification.

--
virtualization experience starting Jan1968, online at home since Mar1970

These Two Charts Show How The Priorities Of US Companies Have Gotten Screwed Up

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: 02 Dec 2012
Subject: These Two Charts Show How The Priorities Of US Companies Have Gotten Screwed Up
Blog: Facebook
These Two Charts Show How The Priorities Of US Companies Have Gotten Screwed Up
http://www.businessinsider.com/profits-versus-wages

aka

1) Corporate profit margins just hit an all-time high, 2) Wages as a percent of the economy are at an all-time low

while it overlays recessions in gray ... it doesn't overlay the corresponding unemployment rate
http://data.bls.gov/timeseries/LNU04000000?years_option=all_years&periods_option=specific_periods&periods=Annual+Data

1983 & 2010 had a 9.6% unemployment rate; 2000, 2006, & 2007 had a 4.6% unemployment rate; the 2012 recovery so far has been in approx. the 7.5%-8.5% range ... however overall wages (as a percent of the economy) continue to show a downward trend since their high around 1974.

There were some reports that during the last decade's periods of historically low unemployment ... downward pressure on wages was applied using undocumented workers (who would tend to show up in wages but not likely to show up as unemployed).

some of it is long term structural and some of it is the bubble. there were $27T in triple-A rated toxic CDOs done during the bubble (2001-2008)
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

... with wallstreet taking a significant percentage in fees&commissions (possibly on the order of $4T-$5T sucked out); the claim is that the financial services sector tripled in size (as a percent of GDP ... which an extra $4T-$5T goes a long way toward accounting for) during the bubble. A CBO report has that in the last decade, (federal) revenue was cut by $6T (compared to baseline) and spending increased by $6T ... for a $12T budget gap (compared to baseline; the baseline had all federal debt retired by 2010) ... most of this occurring after congress allowed the fiscal responsibility act to expire in 2002 (it had required appropriations match revenue). In the middle of the last decade, the US Comptroller General would include in speeches that nobody in congress was capable of middle school arithmetic (based on how they were savaging the budget) ... some amount of the budget momentum from the last decade still continues. Of the $6T increase in spending (over baseline), $2+T was for DOD, with $1+T appropriated for the wars ... and DOD apparently isn't able to show what the other $1+T went for. The total (including long-term medical care) cost of the wars is now expected to exceed $5T.

oh and this a different inflation adjusted view of wages and the rise in personal debt:
http://www.nytimes.com/imagepages/2011/09/04/opinion/04reich-graphic.html

from this article:
http://www.nytimes.com/2011/09/04/opinion/sunday/jobs-will-follow-a-strengthening-of-the-middle-class.html

Sources: Robert B. Reich, University of California, Berkeley; "The State of Working America" by the Economic Policy Institute; Thomas Piketty, Paris School of Economics, and Emmanuel Saez, University of California, Berkeley; Census Bureau; Bureau of Labor Statistics; Federal Reserve. The baby boomers (a generation four times larger than the previous one) ... born 46-64 ... would have had peak earning years approx. 1986-2024 ... but the charts show that inflation-adjusted wages went nearly flat in the 79-80 time-frame

There was something like $100m for the head of a GSE ... however some folklore was that they had more lobbyists than employees ... basically a $20k/annum retainer for anybody that had in any way been related to congress ... but still less than a couple billion ... more misdirection and obfuscation when considering several trillion (aka sort of like the FAA assigning blame and only focusing on the .01%).

GSEs were to buy mortgages directly from the loan originators ... as mortgages ... each mortgage had to have documentation and meet various standards. Loan originators then found they could securitize them, pay the rating agencies for triple-A ratings and sell them through wallstreet. With guaranteed triple-A ... they no longer needed standards or documentation, or needed to care about loan quality or borrowers' qualifications ($27T in triple-A rated toxic CDOs).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

Just the four largest too-big-to-fail banks were still carrying $5.2T in toxic CDOs "off-book" at the end of 2008 (at the time going for 22 cents on the dollar; if they had been forced to bring them back on the books, they would have all been declared insolvent and forced to liquidate).
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

As part of the (non-TARP, behind the scenes) bailout ... some amount of this re-appears on GSE books. Only $700B had been appropriated for TARP for the purchase of toxic assets ... the problem was more like an order of magnitude larger ... and TARP funds got used publicly for other purposes (the "real" bailout of toxic assets was done in lots of other ways). During the bubble, GSEs were losing mortgage market share as $27T in securitized mortgages (triple-A rated toxic CDOs) were sold through wallstreet.

The loan market had been profit off the monthly payments. Securitized mortgages had been used in the S&L crisis for fraud. In 1999, we had been asked to look at improving the integrity of the underlying supporting documents (as a countermeasure to fraud). Testimony in the congressional hearings into the pivotal role that rating agencies played in the mess was that they were paid for triple-A ratings when both the sellers and the rating agencies knew they weren't worth triple-A. The resulting triple-A rated toxic CDO pipeline through wallstreet ($27T) turned the business into fees&commissions on the volume through the pipeline. The triple-A rating trumped documentation, enabling no-documentation loans (and eliminating any consideration of loan quality or borrowers' qualifications). Also with no documentation ... no issue about document integrity ... which also shows up in the robo-signing scandal (generating fraudulent documents after the fact). The volume being handled by the GSEs was way less than the toxic CDO wallstreet pipeline.

recent rating agency discussion over in google+
https://plus.google.com/u/0/102794881687002297268/posts/KDSUEX5Nzq2
and in (linkedin) Financial Crime, Risk, and Fraud:
https://www.garlic.com/~lynn/2012o.html#69

... also Can Open Source Ratings Break the Ratings Agency Oligopoly?
http://www.nakedcapitalism.com/2012/11/can-open-source-ratings-break-the-ratings-agency-oligopoly.html

and A 10 Step Program to Replace Legacy Credit Ratings with Modern Default Probabilities for Counter-Party and Credit Risk Assessment
http://www.kamakuraco.com/Blog/tabid/231/EntryId/451/A-10-Step-Program-to-Replace-Legacy-Credit-Ratings-with-Modern-Default-Probabilities-for-Counter-Party-and-Credit-Risk-Assessment.aspx

in the S&L mess there were fraudulent assets and fraudulent documents. With the triple-A rated toxic CDOs they just got people to walk in off the street and sign ... they didn't have to care whether it was fraud or not (there are also news stories from the period about drug cartels using them to launder money ... with the gov. doing everything possible to keep too-big-to-fail in business, you don't hear much about it ... although there were a few stories about too-big-to-jail; somewhat akin to the robo-signing scandal). The $27T total during the mess was about twice annual GDP (see the referenced bloomberg articles for details).

The other thing wallstreet did to further enhance the skim on the triple-A rated toxic CDO pipeline was purposefully construct toxic CDOs designed to fail and then take CDS bets that they would fail (again nobody has gone to jail; this is akin to your neighbors all taking out fire insurance on your house ... and then making sure it burns down). Old reference that in the summer of 2011, CDS betting was at $700T:
http://www.atimes.com/atimes/Global_Economy/ML14Dj02.html

... there was a more recent reference that by summer of 2012 it was at $800T. Again, the market had changed; the issue was no longer the value of the assets, it was purely the volume of transactions through the triple-A rated toxic CDO pipeline ... with fees&commission skimming (and the CDS bets)

The first referenced bloomberg article was that $27T total was done during the mess. The other bloomberg article was that just the four largest TBTF were still holding $5.2T at the end of 2008 ... the other TBTF bring it to ?? guess $7T-$10T. A big reason for paying for the triple-A rating was to open the market to institutions that are restricted to only dealing in "safe" instruments ... like the large institutional retirement funds. One of the reasons that they are trying to do this all behind the scenes is that they don't want to talk about how much of the toxic CDOs were being held by such institutions. So what is still outstanding is probably less than the full $27T. In the fall of 2008 there was $60B in toxic assets sold at 22 cents on the dollar (which pegged the "mark-to-market"). The $27T drove the real-estate bubble during the period ... so most of the $27T involved real-estate that ballooned and then collapsed ... around 30%-40% (as opposed to all real-estate). That puts the still outstanding triple-A rated toxic CDOs (less than $27T and probably more than double $5.2T) somewhere between 22% & 70% of face value.

Other players helping drive the mess were real-estate speculators ... in parts of the country with 20-30% inflation (driven by the $27T & speculators), with no-documentation, no-down, 1% interest-only payment ARMs ... they could make 2000% ROI flipping before the rate adjusted (with the guaranteed triple-A rating opening the market to everything they could generate, loan originators no longer cared who or why a loan was being done).

recent posts mentioning $27T triple-A rated toxic CDO wallstreet pipeline &/or $5.2T still held by just the four largest TBTF at end of 2008:
https://www.garlic.com/~lynn/2012.html#21 Zombie Banks
https://www.garlic.com/~lynn/2012.html#32 Wall Street Bonuses May Reach Lowest Level in 3 Years
https://www.garlic.com/~lynn/2012b.html#19 "Buffett Tax" and truth in numbers
https://www.garlic.com/~lynn/2012b.html#65 Why Wall Street Should Stop Whining
https://www.garlic.com/~lynn/2012b.html#95 Bank of America Fined $1 Billion for Mortgage Fraud
https://www.garlic.com/~lynn/2012c.html#30 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#31 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#32 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#36 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#37 US real-estate has lost $7T in value
https://www.garlic.com/~lynn/2012c.html#38 The Death of MERS
https://www.garlic.com/~lynn/2012c.html#45 Fannie, Freddie Charge Taxpayers For Legal Bills
https://www.garlic.com/~lynn/2012c.html#46 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#52 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#54 PC industry is heading for more change
https://www.garlic.com/~lynn/2012c.html#55 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#5 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#32 PC industry is heading for more change
https://www.garlic.com/~lynn/2012d.html#42 China's J-20 Stealth Fighter Is Already Doing A Whole Lot More Than Anyone Expected
https://www.garlic.com/~lynn/2012e.html#23 Are mothers naturally better at OODA because they always have the Win in mind?
https://www.garlic.com/~lynn/2012e.html#40 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#42 Who Increased the Debt?
https://www.garlic.com/~lynn/2012e.html#58 Word Length
https://www.garlic.com/~lynn/2012f.html#31 Rome speaks to us. Their example can inspire us to avoid their fate
https://www.garlic.com/~lynn/2012f.html#63 One maths formula and the financial crash
https://www.garlic.com/~lynn/2012f.html#66 Predator GE: We Bring Bad Things to Life
https://www.garlic.com/~lynn/2012f.html#69 Freefall: America, Free Markets, and the Sinking of the World Economy
https://www.garlic.com/~lynn/2012f.html#75 Fed Report: Mortgage Mess NOT an Inside Job
https://www.garlic.com/~lynn/2012f.html#80 The Failure of Central Planning
https://www.garlic.com/~lynn/2012f.html#87 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012g.html#6 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#7 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#8 Adult Supervision
https://www.garlic.com/~lynn/2012g.html#20 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012g.html#22 Psychology Of Fraud: Why Good People Do Bad Things
https://www.garlic.com/~lynn/2012g.html#28 REPEAL OF GLASS-STEAGALL DID NOT CAUSE THE FINANCIAL CRISIS - WHAT DO YOU THINK?
https://www.garlic.com/~lynn/2012g.html#70 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#71 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#76 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#25 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#26 US economic update. Everything that follows is a result of what you see here
https://www.garlic.com/~lynn/2012h.html#32 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#63 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'?
https://www.garlic.com/~lynn/2012h.html#75 Interesting News Article
https://www.garlic.com/~lynn/2012i.html#13 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#14 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#29 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012i.html#51 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'? thoughts please
https://www.garlic.com/~lynn/2012j.html#28 Why Asian companies struggle to manage global workers
https://www.garlic.com/~lynn/2012j.html#65 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012k.html#43 Core characteristics of resilience
https://www.garlic.com/~lynn/2012k.html#75 What's the bigger risk, retiring too soon, or too late?
https://www.garlic.com/~lynn/2012l.html#64 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2012m.html#50 General Mills computer
https://www.garlic.com/~lynn/2012m.html#56 General Mills computer
https://www.garlic.com/~lynn/2012n.html#6 General Mills computer
https://www.garlic.com/~lynn/2012n.html#12 Why Auditors Fail To Detect Frauds?
https://www.garlic.com/~lynn/2012o.html#7 Beyond the 10,000 Hour Rule
https://www.garlic.com/~lynn/2012o.html#26 Why bankers rule the world

--
virtualization experience starting Jan1968, online at home since Mar1970


previous, next, index - home