List of Archived Posts

2005 Newsgroup Postings (06/12 - 06/22)

IBM/Watson autobiography--thoughts on?
More on garbage
Ancient history
Public disclosure of discovered vulnerabilities
IBM/Watson autobiography--thoughts on?
IBM/Watson autobiography--thoughts on?
IBM/Watson autobiography--thoughts on?
Firefox Lite, Mozilla Lite, Thunderbird Lite -- where to find
virtual 360/67 support in cp67
3705 ID Tag
IBM/Watson autobiography--thoughts on?
IBM/Watson autobiography--thoughts on?
IBM/Watson autobiography--thoughts on?
IBM/Watson autobiography--thoughts on?
virtual 360/67 support in cp67
3705
More on garbage
More on garbage collection
Question about Dungeon game on the PDP
The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules)
IBM/Watson autobiography--thoughts on?
IBM/Watson autobiography--thoughts on?
Where should the type information be?
More on garbage
Ancient history
The 8008
More on garbage
IBM/Watson autobiography--thoughts on?
IBM/Watson autobiography--thoughts on?
More Phishing scams, still no SSL being used
Public disclosure of discovered vulnerabilities
Banks
The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules)
How much RAM is 64K (36-bit) words of Core Memory?
How much RAM is 64K (36-bit) words of Core Memory?
Determining processor status without IPIs
Determining processor status without IPIs
The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules)
Determining processor status without IPIs
Determining processor status without IPIs
IBM/Watson autobiography--thoughts on?
Title screen for HLA Adventure? Need help designing one
wheeler scheduler and hpo
Determining processor status without IPIs
Book on computer architecture for beginners
Performance and Capacity Planning
Performance and Capacity Planning
Performance and Capacity Planning
The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules)
Determining processor status without IPIs
Performance and Capacity Planning
Performance and Capacity Planning
Performance and Capacity Planning
Performance and Capacity Planning
Determining processor status without IPIs
Encryption Everywhere? (Was: Re: Ho boy! Another big one!)
Encryption Everywhere? (Was: Re: Ho boy! Another big one!)
Secure Banking
Book on computer architecture for beginners
Book on computer architecture for beginners
The Worth of Verisign's Brand

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: Sun, 12 Jun 2005 10:59:37 -0600
hancock4 writes:
Secondly, I never understood why competitors never could duplicate the level of customer support that IBM provided, esp when some of them were large and successful companies.

i was told a story about one of the seven dwarfs (I think rca) giving testimony at the fed. gov. trial involving ibm. supposedly they testified that in the late 50s every computer company realized that the single most important thing to be successful in the business was to have a compatible computer line (businesses were investing large amounts in software applications ... but it was also a period of significant corporate growth ... and they didn't need to scrap all their software every time they upgraded a machine). the observation was that every other computer company failed at this ... except ibm. local plant/product managers were always optimizing the machine architecture for the technology their specific product was using. IBM supposedly had the only corporate leadership that forced all the different plant/product managers to toe the (360) line.

a lot of the 360s were microcoded machines ... regardless of the native hardware engine characteristics ... the microcode layer hid all of that and provided a uniform 360 architecture to software. the microcode emulation could provide a ten-to-one performance degradation between the 360 delivered thruput and the native hardware engine thruput.

one might claim that the care that went into addressing solutions for customer requirements ... went far beyond just having onsite handholding. another characteristic would be realizing that software development was the primary bottleneck (in the period) ... and hardware upgrades for growing corporations could represent a significantly bigger macro-problem (if software conversions were required) than some of the more day-to-day micro-issues.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More on garbage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More on garbage
Newsgroups: sci.crypt,comp.arch
Date: Sun, 12 Jun 2005 13:26:03 -0600
"Jon A. Solworth" writes:
On the contrary, the systems in place at Barings did not implement standard and prudent separation of duty. If they had done so, Leeson would almost certainly have been unable to commit a fraud of this magnitude.

The purpose of security is to prevent against attackers, even when that attacker is an insider.


i've asserted in the past that in the 80s there was starting to be a lot of work on insider threats and things like collusion ... aka once you have countermeasures against single-insider threats ... you then start dealing with combinations of insiders and collusion as attacks on single-point security processes.

introduction of the internet has tended to obfuscate the insider issues ... not necessarily reducing insiders as the major source of fraud ... but the possibility of outsider attacks can obfuscate the source of the exploits. hopefully at some point ... the environment gets back to the state-of-the-art of the early 80s ... outsiders pretty well excluded as a point of attack ... and the remaining worry being collusion among insiders ... frequently with lots of compensating processes.

strong authentication can help wall out outsiders ... but can also act as a possible deterrent (along with an effective audit log) for insiders, increasing the probability that they could be successfully prosecuted in cases of fraud (as well as possibly catching various acts early so that in some instances the activities can be reversed or abrogated .... i'm partial to the term abrogated ever since i ran across it in the 370 architecture redbook).

use of logging for integrity has been around for some time in numerous guises ... including database acid properties ... however this last rsa conference had some track(?) that sort of presented logging as the new technology for security integrity.
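for illustration ... a minimal sketch (not any particular product's design, just an assumed toy model) of how logging can provide tamper-evidence: each entry's hash covers the previous entry's hash, so an insider altering or deleting an earlier record invalidates every later one.

```python
import hashlib

def append_entry(log, message):
    """Append (message, digest) where digest chains to the previous entry."""
    prev = log[-1][1] if log else "0" * 64  # genesis value for an empty log
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log):
    """Recompute the hash chain; any altered earlier record breaks it."""
    prev = "0" * 64
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "debit account 42: $100")
append_entry(log, "credit account 7: $100")
print(verify(log))                            # True for an untampered log
log[0] = ("debit account 42: $1", log[0][1])  # insider alters a record
print(verify(log))                            # False -- chain no longer verifies
```

this doesn't prevent the fraud ... but it raises the probability of detection (and successful prosecution) by making after-the-fact alteration of the record evident.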

in the financial world ... risk management may include preventing attacks ... but it may also be about catching and reversing the effects of attacks.

some drift with old post
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Ancient history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient history
Newsgroups: sci.crypt
Date: Sun, 12 Jun 2005 13:36:45 -0600
Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
A browser does much more than render HTML. It has a fancy GUI instead of a crude command line interface; it gets and processes data from multiple network sources in parallel; it renders images; it interprets javascript, and it sprouts improvements all the time in response to technical developments and user requests. Yes, you could maybe do a more careful job writing something minimal and strictly standard-conformant; but we're talking about full-featured, responsive browsers of the type that actual users want to actually use.

when we were doing this stuff for the original payment gateway with this small client/server company that wanted to do payment transactions on their server ... we could specify some amount of the integrity and security implementation & operational requirements.
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

i've more recently done some postings in some SOA threads that it might be considered one of the first web SOA implementations.
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005i.html#43 Development as Configuration
https://www.garlic.com/~lynn/2005i.html#44 SqlServerCE and SOA - an architecture question

however, we had no approval/veto authority about what went on in the client implementation and much of the client/server interactions ... primarily just limited to the server and the payment gateway interactions (although i did give some presentations on business critical dataprocessing requirements for network implementation that had some number of the client implementors present).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt
Date: Sun, 12 Jun 2005 14:57:02 -0600
obie writes:
I didn't mean to imply it was perfect, I asked if anyone knew of a currently valid local root exploit.

some analysis spring 2004
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE

of the CVE database
http://cve.mitre.org/

based on some simple stuff from the cve descriptions.

I've had a couple conversations with the cve people about some variability in the descriptions ... sometimes describing cause, sometimes describing results, sometimes giving both. they claimed that they are now trying to provide more uniform structure in the descriptions.

from the analysis last spring ... mostly simple counts of CVE entries containing a specific word or word-pair.

....
1246  mentioned remote attack or attacker
 570  mentioned denial of service
 520  mentioned buffer overflow
 105  of the buffer overflow were also denial of service
  76  of the buffer overflow were also gain root

.... some counts of items that mention root
root access        87
root privileges   151
gain root         183
root              294

... doesn't say root ... but
gain privileges 187
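the counts above came from roughly this kind of simple keyword scan ... a hypothetical sketch in python, with made-up sample descriptions standing in for the actual CVE database entries:

```python
# Hypothetical sample descriptions -- the real analysis was run over the
# full CVE database (http://cve.mitre.org/); these are for illustration only.
descriptions = [
    "Buffer overflow in foo allows remote attackers to gain root privileges.",
    "bar allows local users to cause a denial of service.",
    "Buffer overflow in baz allows attackers to cause a denial of service.",
]

def count_mentions(descs, phrase):
    """Count entries containing the phrase (case-insensitive substring match)."""
    p = phrase.lower()
    return sum(1 for d in descs if p in d.lower())

for phrase in ["remote attack", "denial of service", "buffer overflow",
               "root access", "root privileges", "gain root", "gain privileges"]:
    print(f"{phrase:18} {count_mentions(descriptions, phrase)}")

# overlap counts, e.g. buffer overflows that were also denial of service
both = sum(1 for d in descriptions
           if "buffer overflow" in d.lower() and "denial of service" in d.lower())
print("buffer overflow + denial of service:", both)
```

note that substring matching on "remote attack" also picks up "remote attackers" ... which is why the post counts them together; the variability in description wording (cause vs. result) is exactly what makes these counts rough.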

.... past posts with similar references:
https://www.garlic.com/~lynn/aadsm18.htm#10 E-commerce attack imminent; Sudden increase in port scanning for SSL doesn't look good
https://www.garlic.com/~lynn/2004f.html#20 Why does Windows allow Worms?
https://www.garlic.com/~lynn/2004h.html#2 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004j.html#37 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#58 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004q.html#74 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#28 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#0 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: Sun, 12 Jun 2005 19:15:03 -0600
hancock4 writes:
I wonder if compatibility was a big issue in the late 1950s.

IBM didn't come up with it until the early 1960s SPREAD conference. I believe part of the motivation was internal--IBM realized it had to support a whole bunch of diverse platforms, including systems software, applications software, and peripherals for each platform. I don't think the other companies had as many models to be concerned about for compatibility.

Indeed, even within IBM there was great dissent over compatibility. Haanstra wanted to put out a super-1401 using SLT chips. Others were afraid of losing existing customers who wanted more while S/360 was developed and implemented. (Honeywell was pushing a "liberator" converter for 1401 customers.)


i think the testimony was that they had realized it by the late 50s ... and then they were supposed to do something about it. whoever was testifying claimed that their company had tried ... but that corporate hdqtrs couldn't get the plant/product line executives to toe the line ... while ibm corporate hdqtrs managed to pull it off.

there is some conjecture that if you are the only company in the market having done the single most important thing correctly ... it would be possible to get some number of the other details wrong ... and still prevail.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: Sun, 12 Jun 2005 19:38:32 -0600
hancock4 writes:
They still had a lot of trouble handling time-sharing and DAT-- stuff that had to be added later.

they had 360/67 with dat ... and cp67 for time-sharing. melinda gives some of the history
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

i've commented before that the 360/67 dat & time-sharing was actually more successful than possibly many time-sharing systems that might be better known in some of the literature ... number of systems and number of users ... but that the dominance of the corporation's batch customers vastly overshadowed the dat & time-sharing work.
https://www.garlic.com/~lynn/submain.html#timeshare

it was a period of rapid growth and getting payroll out and processing financial transactions, checks, etc ... on the batch systems had a much bigger bang for the buck than a lot of the time-sharing stuff. however, eventually some saturation point was reached on all the really important work that needed to be done ... and then along comes more entry-level computing that can be used for the less important computing.

while the corporation's batch market was much larger than the corporation's time-sharing market ... that time-sharing market was still larger than some number of the competitors' time-sharing markets (it's just that the magnitude of the batch market dwarfed both for quite some time).

some of the smooth progression was interrupted with the side-track into future system
https://www.garlic.com/~lynn/submain.html#futuresys

which was canceled before it was ever announced.

note however, the internal corporate infrastructure was one of the major world-wide users of its own time-sharing product ... and the associated networking infrastructure (built on and in conjunction with that time-sharing product) was larger than the whole arpanet/internet from just about the beginning until around the summer of '85.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

I've asserted that one of the reasons that the internal network was larger than the arpanet/internet from just about the start ... was that every node in the internal network had a flavor of gateway functionality from the start ... which the arpanet/internet didn't get until the great switch-over on 1/1/83. At the time of the switchover arpanet/internet had approx 250 nodes
https://www.garlic.com/~lynn/subnetwork.html#internet

which was much smaller than the internal network ... which passed the 1000 node mark a little later in 83
https://www.garlic.com/~lynn/internet.htm#22

for a time there was even bitnet/earn ... an educational network using the internal network technology ... but distinct from the internal network (and not included in size comparisons of the internal network to internet size)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

there were some number of commercial time-sharing offerings built on the technology ... but possibly dwarfing all of them was the internal HONE system ... which was online support for world-wide field, sales, and marketing people
https://www.garlic.com/~lynn/subtopic.html#hone

at the time of the consolidation of all the US HONE datacenters to cal. in the late 70s ... it was starting to push 40k userids ... and the HONE offering was replicated in numerous countries around the world.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: Sun, 12 Jun 2005 20:03:28 -0600
hancock4 writes:
Undoubtedly emulation kept a lot of existing customers in the fold. But I think the IBM 'hand-holding' helped attract many new customers.

there has been some conjecture that Amdahl left to do 360 clone because of FS project
https://www.garlic.com/~lynn/submain.html#futuresys

he gave a talk at mit in the early 70s and was asked about raising funding for the company ... he said that they figured that even if ibm were to immediately walk away from 360 (possibly a veiled reference to FS ... which was going to be more radically different from 360 than 360 had been from the prior generations), customers had already spent at least $100b on software application development ... and that would keep Amdahl in the 360/370 clone market at least thru the end of the century (aka even if ibm walked away from 360/370 at that time).

so supposedly Amdahl clones came about because of FS project ... and FS project supposedly came about because of 360 controller clones ... aka
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

and somewhere there is a write up blaming a 360 controller clone project that i worked on as an undergraduate for (helping?) spawning the 360 controller clone market
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Firefox Lite, Mozilla Lite, Thunderbird Lite -- where to find

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Firefox Lite, Mozilla Lite, Thunderbird Lite -- where to find
them?
Newsgroups: netscape.public.mozilla.browser,netscape.public.mozilla.general,netscape.public.mozilla.win32
Date: Sun, 12 Jun 2005 20:33:38 -0600
speedy writes:
Are there any lite versions of these programs around? Particularly if it's just basic functionality required?

how 'bout ... 389181 Oct 30 1994 nscape09.zip

... and a blast from the past ...
ATLANTA-Nov. 21, 1994-(BUSINESS WIRE)-Making it easier for businesses and consumers to use and shop the Internet, MCI Monday announced "internetMCI," a portfolio of services featuring such components as a new secure electronic shopping mall, a user-friendly software package for easy Internet access and high-speed network connections to the Internet.

"MCI is making the Internet as easy to use, as accessible and as critical to businesses as today's global phone network," said Timothy F. Price, president of MCI's Business Markets. "With internetMCI, businesses of all sizes will now be able to not only display but also directly sell their goods and services over the Internet. For the 25 million people on the Internet, shopping the Internet will become simple and secure."

The new MCI offering represents the most comprehensive set of Internet services in the industry, according to Price. "There are other companies that offer Internet-related services, but no one else offers the full range of applications software, access, storefronts and consulting services in one package," he said. "We now have everything companies need to promote commerce over the net. This is what American businesses have been waiting for."

Users of internetMCI will be able to browse and shop in MCI's new Internet shopping mall called marketplaceMCI. MCI said it is working with a number of America's most well-known retailers and information providers to design storefronts for them when marketplaceMCI opens early next year. MCI has already begun beta testing on-line electronic shopping with about 40,000 employees.

A key component of internetMCI is a software system developed by Netscape Communications (formerly Mosaic Communications). Using encryption technology from RSA Data Security, the system integrates a number of components into a secure environment.

Included are the Netscape Navigator server and database software for storefront management and secure credit-card clearing. Also included is a digital signature system operated by MCI to certify and identify valid merchants for internetMCI.

The complete system allows consumers to shop and make secure transactions directly over the Internet without the fear of having their credit card number or other sensitive information stolen by electronic eavesdroppers.

The software package also has point-and-click technology that lets consumer and business users easily and quickly browse the Internet's World Wide Web over ordinary phone lines.

"Transaction security is the last major hurdle to making the Internet a viable marketing and distribution channel for businesses," said Price. "By the year 2000, MCI expects commerce on the Internet will exceed $2 billion and be common as catalog shopping is today."

Through an agreement with FTP Software Inc., MCI will provide the Internet Protocol software along with the Netscape software in one easy-to-install package. FTP Software, the leading independent supplier of TCP/IP-based network software, will also provide MCI with integration and support of its software.

MCI will offer internetMCI software to customers at prices starting as low as $49.95. The internetMCI software will also be included at no additional charge to customers of networkMCI BUSINESS, an integrated information and communications software package.

MCI will market storefronts to retailers and service companies that want to promote and sell their goods to the estimated 25 to 30 million people who can now access the Internet worldwide. MCI will offer businesses consulting in the design, implementation and management of their storefronts, in addition to the added value of MCI's ongoing promotion and marketing of the new mall services.

MCI To Provide High-Speed Connections to Internet

MCI's internetMCI Access Services will be fully integrated with its existing business long distance services. Internet access will be available in a wide range of methods from switched local and 800 access and dedicated access to more advanced switched data services such as ISDN, frame relay and, in the future, SMDS and ATM.

A full Internet service provider, MCI will offer dedicated access to the Internet from nearly 400 locations in the United States.

Another component of internetMCI is the company's new high-speed connections to the Internet through the new MCI Internet Network. This network is one of the highest capacity, most widely-deployed commercial Internet backbones in the world, providing businesses with direct and reliable connections to the Internet.

Compared to most conventional Internet access networks, MCI Internet Network offers greater transmission speed and capacity because the network operates at 45 megabits per second. Next year, MCI will increase the speed of the MCI Internet Network to 155 megabits per second, capable of transmitting 10,000 pages in less than a second or a 90-minute movie in just three minutes.

MCI Selected as Primary Internet Carrier

Following its selection by some of the major regional Internet providers in the United States, MCI will become one of the world's largest carriers of Internet traffic - carrying more than 40 percent of all the U.S. Internet traffic.

The regional Internet providers BARRnet; CICnet; CSUnet; JVNCnet; Los Nettos; Merit/MICHnet; MIDNet; NEARnet; NorthWestNet; SURAnet; and Sesquinet have been a part of the Internet since its inception and have been a major force in the drive towards ubiquitous network connectivity, which has helped make the Internet so popular.

MCI's Internet initiatives are being directed by Vinton G. Cerf, MCI senior vice president for data architecture, and an industry- recognized "Father of the Internet," along with a team of world- class experts on the Internet.

"The Internet is a global resource of unmeasured value and potential to educators, governments, businesses and consumers," said Cerf. "MCI is preserving and enhancing the intelligence and economic power of the Internet while making it easier and more accessible than ever before."

MCI Showcases Interactive Multimedia Message on the Internet

Earlier this month, MCI began an innovative marketing campaign on the Internet that plays off the company's successful Gramercy Press ads for networkMCI BUSINESS. Users of the Internet can, with a click of the mouse, learn more about the characters in the Gramercy Press commercials, even hear their voices or see video images of them.

The campaign, which already has been viewed by more than 100,000 Internet users, has an interactive component that allows Internet users to actually submit their art, poetry or short stories for viewing on the Internet. MCI selects pieces and publishes them on the "net," where they can be viewed by the millions of users of the Internet worldwide.

Internet users can travel to Gramercy Press on their own (address:
http://www.mci.com/gramercy/intro.html) or via "Hotwired," the new on-line spinoff of "Wired" magazine. MCI is a sponsor of the magazine's "Flux" section which offers news about Internet movers and shakers. Hotwired members can reach Gramercy Press at
http://www.hotwired.com (click-on "signal" zone).

"The Internet is a marketer's dream come to life," added Price. "It's full-color, full-motion and full of potential. MCI not only expects to be on the leading edge of marketing its own services on the Internet, but also in the forefront of helping our customers tap the marketing power of the Internet."

For more information on internetMCI, call 800/779-0949.

With 1993 revenue of nearly $12 billion, MCI Communications Corp. is one of the world's largest communications companies. Headquartered in Washington, MCI has more than 65 offices in 58 countries and places. The company's Atlanta-based MCI Business Markets provides a wide range of communications and information services to America's businesses, including networkMCI BUSINESS, long distance voice, data and video services, and consulting and outsourcing services.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual 360/67 support in cp67

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual 360/67 support in cp67
Newsgroups: alt.folklore.computers
Date: Sun, 12 Jun 2005 21:33:47 -0600
jsavard writes:
I'm certainly not saying that IBM was incapable of something even *I* figured out how to do.

However, my page does show that there are some complexities involved, which makes it understandable that IBM might have left that feature out on their _first try_, the IBM 360/67.


as i repeated before, virtual 360/67 was eventually in cp67 (by release 3, it just wasn't in the earlier releases)
https://www.garlic.com/~lynn/2005j.html#38 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005j.html#50 virtual 360/67 support in cp67

followed by implementation of virtual 370 (including dat/virtual memory ... which had a somewhat different layout for control regs and segment/page tables compared to 360/67) in cp67

tss/360 was the "official" corporate time sharing system product for the 360/67 ... and i believe at one point hit something like 1200 employees working on it.

by comparison for the first couple cp67 releases there was something like 12 people total working on both cp and cms at the science center (difference of 100:1)
https://www.garlic.com/~lynn/subtopic.html#545tech

for some more of the lore ... see melinda's history:
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

course ... i was also fiddling with it as an undergraduate ... reference to part of presentation i made at share meeting
http://www.share.org/

on extensive kernel changes i had done over a six month period after some people from the science center had come out and installed cp67 at the university
https://www.garlic.com/~lynn/94.html#18 CP67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP67 & OS MFT14

and even more mind boggling, imagine virtual 390 mainframes implemented on intel processors
https://web.archive.org/web/20240130182226/https://www.funsoft.com/ &
http://www.conmicro.cx/hercules

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

3705 ID Tag

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3705 ID Tag
Newsgroups: bit.listserv.ibm-main
Date: Sun, 12 Jun 2005 22:46:39 -0600
aw288@ibm-main.lst (William Donzelli) writes:
I am not to familiar - none at all, actually - with the 2914. What is it?

channel/controller switch ... quicky search engine turned up a reference buried in this page:
http://www.freepatentsonline.com/4075693.html

engineering/bldg14 and product test/bldg15 used a number of them. nominally they are used to switch controllers between channels ... typically channels on different systems.

bldgs 14/15 typically had half dozen engineering test cells under development ... and 2914s were used to isolate all but the testcell currently scheduled for processor testing.

misc. posts about rewriting i/o subsystem so that they could operate multiple testcells concurrently in operating system environment
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 12 Jun 2005 23:15:18 -0600
Charles Richmond writes:
Leasing had the advantage that the equipment was *not* taxed, like the capital equipment that you *owned*. And the lease money was a deductible business expense.

i have some recollection of being told that learson was responsible for converting the mainframe leasing install base to sales ... it got him a really great qtr (year?) but it had some later downside effect on ongoing revenue.

http://www-03.ibm.com/ibm/history/exhibits/chairmen/chairmen_5.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 13 Jun 2005 08:23:53 -0600
Anne & Lynn Wheeler writes:
some of the smooth progression was interrupted with the side-track into future system
https://www.garlic.com/~lynn/submain.html#futuresys

which was canceled before it was ever announced.


here is a time line reference
http://person.wanadoo.fr/jeanbellec/information_technology_3.htm
http://febcm.club.fr/english/information_technology/information_technology_3.htm

with future system reference, sep. 1971:
decision of John Opel to start the Future Systems product line. Design was headed by George Radin from Research Division

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 13 Jun 2005 08:52:38 -0600
and just for some additional drift ... a long account about the unbundling decision, 6/23/89 ... search engine found around the web in several places ...

A Personal Recollection: IBM's Unbundling of Software and Services
http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/mags/an/&toc=comp/mags/an/2002/01/a1toc.xml&DOI=10.1109/85.988583

which is a pdf file ... google html rendered version
http://64.233.179.104/search?q=cache:NgIil04v3OEJ:www.kiet.edu/~docs/ejournals/Annals%2520of%2520the%2520history%2520of%2520computing/a1064.pdf+ibm+leasing+mainframe+learson&hl=en

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 13 Jun 2005 08:54:18 -0600
Anne & Lynn Wheeler writes:
and just for some additional drift ... a long account about the unbundling decision, 6/23/89 ... search engine found around the web in several places ...

finger slip ... announced 6/23/69

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual 360/67 support in cp67

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual 360/67 support in cp67
Newsgroups: alt.folklore.computers
Date: 13 Jun 2005 10:07:00 -0600
Anne & Lynn Wheeler writes:
course ... i was also fiddling with it as an undergraduate ... reference to part of presentation i made at share meeting
http://www.share.org/

on extensive kernel changes i had done over a six month period after some people from the science center had come out and installed cp67 at the university
https://www.garlic.com/~lynn/94.html#18 CP67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP67 & OS MFT14


the upcoming share meeting is in boston (8/21-8/26, 2005) ... the meeting where i gave the above presentation was in Atlantic City ... fall '68.

three people from the science center had come out and installed cp67 at the univ. the last week of jan. '68 ... and then i got to go to the spring share meeting in houston where cp67 was officially announced.

june of '68 ... ibm was holding a one week education class for cp67/cms in beverly hills. the friday before, several of the primary people from the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
had left to form a cp67 commercial timesharing service
https://www.garlic.com/~lynn/submain.html#timeshare

and I show up on monday ... and somehow get asked to give some of the classes (and i'm still just a lowly undergraduate).

i gave a presentation on history of vm performance at seas (european) share 25th anv. meeting in fall of '86 held at St.Helier, Jersey.
http://www.daube.ch/share/seas01.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

3705

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3705
Newsgroups: bit.listserv.ibm-main
Date: 13 Jun 2005 10:21:57 -0600
howard@ibm-main.lst (Howard Brazee) writes:
Neat. All I have are a few crashed disk packs.

My ideal nerdy gift would be the shell of an old Cray, with the bench around the circular computer.


some of the people at almaden research presented me with front panel of HYPERchannel adapter box ... that had been engraved with some stuff ... including a stylized image of the almaden research bldg.
https://www.garlic.com/~lynn/subnetwork.html#hsdt

cray and thornton had worked together at cdc ... thornton left to form network systems corp ... and produce HYPERchannel for heterogeneous high-speed interconnect.

related recent postings (involving both 3705 and hsdt):
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More on garbage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More on garbage
Newsgroups: sci.crypt,comp.arch
Date: 13 Jun 2005 11:03:23 -0600
"Jon A. Solworth" writes:
Availability is a liveness issue, the other two are safety issues. A system which does *nothing* is *always* safe, and hence by definition the safety issues (confidentiality and integrity) are satisfied.

A system which does absolutely nothing cannot be secure. A system must have some function, and security is there as one of the ways of protecting that function from adversaries.


sporadically over the last 30 plus years ... i've run into the comment that the purpose of security is to make systems unusable (if you can't accomplish anything, then hopefully neither can the attackers) ... and frequently security and human factors can be diametrically opposed.

a simple scenario is 3-factor authentication
something you have
something you know
something you are


where something you know is a shared-secret, and the security rules require a unique shared-secret for every different, unique security domain .... leading to the current situation where people are required to memorize scores of unique shared-secrets that are never written down/recorded.

somewhat the opposite is using trivial (and supposedly easily remembered) personal information for the something you know authentication shared-secret ... with a large number of different security domains selecting secrets from a small common pool of personal information (ss#, birth date, mother's maiden name, etc).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More on garbage collection

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More on garbage collection
Newsgroups: sci.crypt,comp.arch
Date: 13 Jun 2005 11:39:25 -0600
pg_nh@0502.exp.sabi.UK (Peter Grandi) writes:
It is so sad... I remember such tools well, and that very very few people even know that they ever existed or would care is part of my usual refrain ''the lost art of memory management'' and the general ''thirty years of valuable research down the drain of oblivion''.

there was a lot of synergy at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

between the work on systems and the study of systems. some of this sort of started the evolution of performance tuning and management into things like capacity planning.

there was a huge amount of benchmarking and statistics gathering ... preparing for the release of the resource manager there was a final phase of 2000 benchmarks that took 3 months elapsed time to run. there were configuration variables and workload variables. the first 1000 or so benchmarks were done with preselected values using past knowledge. for the final 1000 ... there was an apl analytical model ... into which all the prior runs had been input ... and it was modified to select the benchmarking parameters (and look for things like anomalous operating points).

sort of generically, there were activities:
• tracing & sampling
• modeling
• multiple regression analysis

vs/repack from the science center was an example of tracing ... which then used some cluster analysis and human observation (like for hot spots) ... to reduce both working set size as well as page fault rate characteristics.

for ecps ... deciding what high-use kernel pathlength to drop into microcode ... there was tracing ... and then a special m'code change that did instruction address sampling.
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

a version of the apl analytical model was deployed on the HONE system
https://www.garlic.com/~lynn/subtopic.html#hone

(which provided world-wide field, sales and marketing support) as the performance predictor. salesmen could gather customer performance profile data, input it into the performance predictor, and do what-if scenarios (workload &/or hardware changes).

a couple years ago, I ran into a descendent of the performance predictor. it had greatly evolved over the years ... somebody had obtained rights to it, run it thru an apl->c translator, and was using it in a consulting business, targeting mostly large complex mainframe operations.

anyway a customer ... had an extremely large business critical application that was tending to fully utilize a very large number of mainframe processors. Extensive I-address sampling had been used to identify hot spots for review and recoding. The modeling tool had also been used to further identify possible bottlenecks that could benefit from redesign and/or rework.

it turns out that in the early work at cambridge, the different methodologies tended to turn up different kinds of performance areas of extreme interest. cambridge had heavily instrumented cp67 and had years of 7x24 activity data. it was found that multiple regression analysis of the activity counters could turn up things of interest that weren't identified by either i-address sampling (for hotspots) or modeling (sometimes driven by the same activity count information used in the multiple regression analysis).

Anyway, multiple regression analysis of a large number of activity counters turned up something that needed rework (and wasn't identifiable by the other methodologies) and resulted in something like 15 percent overall improvement (and we are talking a very large number of mainframe processors here running this large business critical application).
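the regression technique can be illustrated with a toy sketch (synthetic data and all the names here are mine, not from the cambridge work): fit the activity counters against an overall response measure and look for counters carrying unexpectedly large weight.

```python
import numpy as np

def fit_counters(X, y):
    """least-squares fit of activity counters X (runs x counters) to a
    response measure y (e.g. cpu consumption); returns coefficients,
    with the intercept as the last element."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# synthetic example: 200 "measurement intervals", 3 counters; the
# response depends heavily on counter 2 -- the kind of relationship
# the regression surfaces even when hot-spot sampling shows nothing
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = 0.5 * X[:, 0] + 4.0 * X[:, 2] + 0.1 * rng.standard_normal(200)

coef = fit_counters(X, y)
print(coef)  # the large coefficient on counter 2 flags it for review
```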

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Question about Dungeon game on the PDP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question about Dungeon game on the PDP
Newsgroups: alt.folklore.computers,comp.sys.dec
Date: 13 Jun 2005 12:50:22 -0600
"S.C.Sprong" writes:
There's a Zork in the BSD 2.08 games collection (VAX 11/750). It's a C wrapper which directly calls a binary called 'dungeon'. Including overlays, 'dungeon' is about 392 KiB.

i was scheduled to stop by the tymshare data center and negotiate how to get complete updated copies of all the vmshare stuff on a regular schedule
http://vm.marist.edu/~vmshare/

so i could make it available on internal systems ... including the internal HONE system that provided world-wide support for field, sales & marketing
https://www.garlic.com/~lynn/subtopic.html#hone

... and also to pick up this new thing they had gotten called adventure. since they were a commercial time-sharing service ... when their executives found out that there were games on the system ... they wanted it immediately removed ... but then changed their minds when they were told how much revenue adventure was generating.

i had also mentioned it to a couple people on the internal network ... and a couple days later a copy arrived over the internal network from somebody in the UK (who had picked it up at a local univ).

melinda has a cms copy of zork on her webpage
https://www.leeandmelindavarian.com/Melinda#VMHist

scroll down towards the bottom ... it claims that the compressed binary executable file is 162k

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed  ENTIRELYwith slide rules)
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: 13 Jun 2005 13:49:25 -0600
johnf@panix.com (John Francis) writes:
Well, it's a feature of the new premium digital cable services. Digital/optical cable services have more than enough bandwidth to provide a TV-quality streaming video feed to every consumer; they already offer cable modem service with more throughput than that.

See, for example, Comcast's "ON DEMAND" service. They tout this as having 3000 programs you can choose from. That's probably a bit of an exaggeration, but there seem to be several hundred shows available at any time - usually for a one or two month period. That's more than could be stored on most DVRs, so push technology isn't adequate.

Conspicuous by their absence, at present, are the major broadcast networks.


there were a number of efforts formed in the early 90s that thought video-on-demand was the next big thing. slightly related recent post (late '94):
https://www.garlic.com/~lynn/2005k.html#7 Firefox Lite, Mozilla Lite, Thunderbird Lite -- where to find

i knew people at the time working on database infrastructures that supposedly were targeted at delivering movies to the home consumer market.

from summer of 93:
AMPEX DST 600 SUBSYSTEM AND SGI ONYX PROVIDE VIDEO-ON-DEMAND Anaheim, Calif. -- At SIGGRAPH '93 here this week, Silicon Graphics Inc. demonstrated an Ampex Systems Corp. DST 600 tape drive subsystem interfaced to an Onyx graphics supercomputer acting as a video server.

from spring of 94:
ADSL/ATM TO YIELD RESIDENTIAL BROADBAND NETWORKS/VIDEO ON DEMAND The combination of two key elements, ATM switches and ADSL technology, is the major step forward that will facilitate commercially viable Residential Broadband Networks.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 13 Jun 2005 14:23:45 -0600
haynes@alumni.uark.edu (Jim Haynes) writes:
I remember from the trade literature at the time a different reason for the Amdahl clones. Gene Amdahl always was a fan of the high-performance single processor machines. As an IBM insider he knew there was a pricing formula for the various 360 models which was to maximize profit across the entire line. By his calculation this resulted in the top of the line machines being overpriced in relation to cost of manufacture. He wanted the price of the high-end machines reduced to increase sales and allow him to develop even higher performance models. When top management would not bend to his desires he decided to start a competing company, taking advantage of the price umbrella that IBM had practically guaranteed to him. --

ibm would frequently get the science and engineering of manufacturing production extremely optimized (lots of studies of the manufacturing process, yields, quality, optimizations, etc) ... so the truth might just be the opposite; processors at the knee of the technology curve were the most amenable to mass production technology. the high end tended to have much higher upfront R&E costs (pushing the technology) and its techniques frequently were much less adaptable to really high volume manufacturing.

The high-end tended to have lower volumes than the mid-range (which almost by definition tended to be at the knee of the price/performance curve) ... and so it was much more difficult to recover the larger upfront R&E costs and/or justify the upfront costs of developing extremely high volume manufacturing techniques.

this gets more complicated later in large single chip VLSI designs ... where the complexity and performance of large chips can go up ... but if they manage to capture sufficient market volume ... it can justify huge upfront R&E (both chip design and manufacturing efficiencies) ... and the market volume then can actually result in lower per item price.

the clone market was somewhat different ... he was coming into a market that had a large install base of MVS & virtual memory at the high-end. in that period, there was a joke that an avg. MVS shop required an avg. of 20 IBMers as part of the care & feeding.

while unbundling had started separate pricing for applications and services ... kernel software was nominally free (modulo the number of vendor staff needed to keep it running well). Amdahl's first big uptake was in the technical MTS market.

there were two non-strategic virtual memory systems developed for 360/67 ... one was cp67 at the cambridge science center and the other was MTS at UofMich. MTS was ported to 370 virtual memory and was installed at some number of universities. going after the MTS accounts, Amdahl didn't have to fight customer worries about the large vendor MVS staff no longer being around. cp67 had morphed into vm370 ... for virtual memory 370s ... and was offered by IBM ... there were a large number of places in marketing where it was viewed as non-strategic, and customers provided the majority of their own support ... w/o a lot of vendor hand-holding.

the penetration of Amdahl into "true blue" (commercial) accounts was yet to happen when i was getting ready to release the resource manager. one of ibm's premier, extremely large, true blue accounts was considering an Amdahl order. This sort of prompted the whole transition to also charging for kernel software, and the resource manager (and I) got picked to be the guinea pig.

note ... this customer had so many real true-blue MVS systems in the datacenter (in addition to vm) ... I don't think the customer figured that making one of the machines a different color would drastically reduce the number of vendor people helping with the care and feeding of MVS. i got to be pretty good friends with the customer people at the account ... i was being encouraged to drop by and talk to them as frequently as possible; i think somebody was hoping that if the customer got to really like me, they might cancel the Amdahl order (staving off Amdahl breaking out of the fringe techy market into the real true-blue commercial world).

recent past posts about unbundling announcement on 6/23/69 ... but charging for kernel software didn't start to happen until the resource manager.
https://www.garlic.com/~lynn/2005g.html#51 Security via hardware?
https://www.garlic.com/~lynn/2005g.html#55 Security via hardware?
https://www.garlic.com/~lynn/2005i.html#46 Friday question: How far back is PLO instruction supported?
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005k.html#12 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#13 IBM/Watson autobiography--thoughts on?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 13 Jun 2005 15:06:46 -0600
being on the right part of the price/performance curve also started to drive the cluster products .... hoping to significantly lower the upfront R&E for the bang for the buck ... which can snowball; greater volume can mean that you can do somewhat larger upfront R&E amortized over a larger number of units.

we worked on that in ha/cmp ...
https://www.garlic.com/~lynn/95.html#13
hoping to leverage commodity parts for both availability as well as scalable performance
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: 13 Jun 2005 21:23:45 -0600
Chris Gray writes:
Now, doing that may sound like a nice "pattern", but really it was just me thinking "gee, I can make this cool new editor without having to write all the hard buffer and file management stuff - just let 'ed' do it". I think this was in the early-to-mid 1970's, way before "programming with patterns" was invented.

as an undergraduate in the 60s ... after getting cp67/cms at the university ... there was a cms fortran graphics subroutine library for the 2250 vector graphics display (done by lincoln labs) ... i played around with interfacing the graphics library as a front end to the cms edit command.

here is picture of 2250m4
http://www.columbia.edu/cu/computinghistory/2250.html

actually a 2250 with 1130 as controller.

the university had a 2250m1 ... which was direct channel attach to the 360 (no 1130).

cambridge science center had a 2250m4 ... somebody ported spacewars to it (aka the 1130). two players used the 2250 keyboard split into left & right halves ... with various keys mapped to movement & firing functions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More on garbage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More on garbage
Newsgroups: sci.crypt,comp.arch
Date: 14 Jun 2005 09:01:39 -0600
daw@taverner.cs.berkeley.edu (David Wagner) writes:
My real motivation was to get people to stop thinking of a one-size-fits-all view of security properties, and to recognize that the set of security properties needed for each application is application-dependent: there is no one answer that will fit all systems. The one-size-fits-all view leads to fallacies, such as thinking that anything that turns an integrity problem into an availability problem is useless (it isn't necessarily useless; whether it is useful or not depends on the application's security requirements). Trying to say things like "for this broad class of systems, availability is always critical" (an exaggeration for effect) strikes me as heading towards the border of dangerous thinking; even if it hasn't yet crossed that border, it might be more productive to focus on specific applications and understand their security requirements individually.

security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61 Security Proportional To Risk

to some extent a credit card limit is set proportional to risk assessment.

some gov. stuff wants 30 years confidentiality ... and some commercial stuff has talked about 50 years confidentiality. a lot of that stuff may have very low availability requirements ... a lot less than what a 1-800 system or a 911 system wants ... around five-nines ... less than 5 minutes of outage per year.

when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we talked to people doing a 1-800 system. they had an implementation that was claimed to be a hundred percent up when it was up ... but periodically needed to be taken down for maint ... which could be several hrs. they could blow a century's worth of downtime (at 5 minutes per year) in a single maintenance session.
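the five-nines arithmetic behind that comparison is easy to check (a back-of-envelope sketch):

```python
def downtime_per_year_minutes(nines):
    """allowed downtime per year (in minutes) for an availability level
    expressed as a number of nines, e.g. 5 -> 99.999% availability."""
    availability = 1.0 - 10.0 ** (-nines)
    return (1.0 - availability) * 365.25 * 24 * 60

# five nines works out to roughly 5.26 minutes of outage per year --
# so a multi-hour maintenance window burns decades of five-nines budget
print(round(downtime_per_year_minutes(5), 2))
```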

some DUKPT ... minor reference
https://www.garlic.com/~lynn/aadsm19.htm#36

can live with standard DES ... a $100 or less transaction that might have a subsecond lifetime. the risk is (possibly) $100 if the attacker can brute force the DES key in less than the lifetime of the transaction (possibly a second or less). and there isn't necessarily a confidentiality requirement ... it is purely an integrity requirement. the issue is: what is the probability that an attacker is going to spend a couple million to brute force a derived DES key in less than a second for a $100 return ... and how many times might they try it.

basel talks about risk adjusted capital ...
http://www.bis.org/

the capital reserve amount (as security) set aside is related to the calculated risk. one of the battles in the new basel-ii requirements was whether or not to keep the *new* qualitative data section ...
http://www.bis.org/publ/bcbsca.htm#pgtop

when risk adjusted capital requirements up until then had been based primarily on quantitative numbers. the qualitative data stuff disappeared from basel-ii .. but quite a bit of what had been in the qualitative data section now appears in sox.

the pain scenario for security
P ... privacy (or sometimes confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation


can have different requirements for the individual components in different environments ... aka short-lived transactions might have little or no confidentiality requirement ... but higher integrity requirements ... especially if the transaction carries very little personally identifiable information (PII) ... slightly related recent post
https://www.garlic.com/~lynn/aadsm19.htm#35

there was some reference recently to digital certificate based infrastructures focusing heavily on whether a 1024 bit or 2048 bit key was needed ... and it was mostly meaningless; the attackers are finding it easier to exploit other parts of the infrastructure than to brute force the keys (various institutions worried about 30-50 year confidentiality lifetimes might be more worried about it).

i've written some about how most infrastructures out there should be more worried about the overall integrity of the authentication process than about the difference between 1024 and 2048 bit keys used for authentication purposes (again, this key length distinction may make a whole lot more difference to institutions worried about 50 year confidentiality than to institutions worried about much shorter period authentication operations). the security proportional to risk phrase generalizes to parameterised risk management: making authorization decisions (analogous to the simple credit card limit) dynamically, on a transaction by transaction basis, given a broad range of risk/threat factors.

Rather than creating a static set of security requirements ... allow the various components of security to be relatively fluid ... and then make dynamic decisions about approval or non-approval based on the specific security component levels for a specific operation.
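a minimal sketch of what such a dynamic, per-transaction decision might look like -- the factor names and thresholds here are hypothetical illustrations of the idea, not from any actual standard or deployment:

```python
def approve(amount, base_limit, risk_factors):
    """dynamically shrink the effective limit by the current risk/threat
    factors (each in 0..1, where 0 means no concern) -- analogous to a
    credit limit adjusted per transaction rather than set statically."""
    effective = base_limit
    for factor in risk_factors.values():
        effective *= (1.0 - factor)
    return amount <= effective

# the same $400 transaction under a $1000 base limit: approved when the
# environment looks quiet, declined when threat indicators are elevated
quiet = {"channel": 0.1, "history": 0.0}   # effective limit: 900
hot   = {"channel": 0.5, "history": 0.4}   # effective limit: 300
print(approve(400, 1000, quiet))
print(approve(400, 1000, hot))
```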

some of this may be the boyd influence (misc of my past posts)
https://www.garlic.com/~lynn/subboyd.html#boyd
other boyd articles from around the web
https://www.garlic.com/~lynn/subboyd.html#boyd2

who advocated rapid dynamic adjustment to fluid and changing environment ... rather than trying to establish static, cast in concrete configurations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Ancient history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient history
Newsgroups: sci.crypt,comp.arch
Date: 14 Jun 2005 10:18:09 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Which is orthogonal and irrelevant to my point. If nobody knows where the boundary is between overflowing and access to an extended area (i.e. permitted use), then it is impossible to insert such checking correctly. And that is the case.

let's see ... an infrastructure not able to determine the bounds .... maybe this has been raised before ...
https://www.garlic.com/~lynn/2005b.html#46 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#60 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#64 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#13 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#14 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#29 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#30 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#33 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#37 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#47 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#52 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#15 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#69 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: 14 Jun 2005 12:37:09 -0600
Steve O'Hara-Smith writes:
There's no point in encrypting free digital broadcasts - with the right hardware you should be able to dump the mpeg stream straight to disc.

back when we were doing hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt

we were working with one of the reed-solomon companies on FEC for high-speed communication. they had done a lot of the work on the cdrom iso standard for encoding ... and stuff like interleaving (lots of faults are bursts or scratches ... so interleaving can gain some improvement). they also worked on various parts of digital broadcasting technology. the claim was that not only did appropriately encoded digital transmission reduce the bandwidth requirements (vis-a-vis analog) ... but that the encoding significantly improved the reception quality ... equivalent noise injection in analog & encoded digital ... could result in total analog white-out ... while digital still delivered a relatively good quality picture.
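interleaving itself is easy to sketch -- write codewords as rows and transmit column by column, so a burst on the medium is spread thinly across many codewords, each of which the error-correcting code can then repair (a toy illustration only; the actual cdrom interleaving parameters differ):

```python
def interleave(codewords):
    """codewords: list of equal-length symbol lists (rows).
    transmit column by column so adjacent symbols on the wire
    come from different codewords."""
    return [row[i] for i in range(len(codewords[0])) for row in codewords]

def deinterleave(stream, n_codewords):
    """invert interleave(): reassemble the original codeword rows."""
    depth = len(stream) // n_codewords
    return [[stream[i * n_codewords + j] for i in range(depth)]
            for j in range(n_codewords)]

# 4 codewords of 6 symbols; a wire burst wiping 4 consecutive symbols
# touches each codeword at most once -- a single-symbol error per
# codeword, well within a reed-solomon code's correction capability
words = [[w * 10 + i for i in range(6)] for w in range(4)]
stream = interleave(words)
assert deinterleave(stream, 4) == words
```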

the industry meetings in the '90 timeframe with the dept of commerce had a lot of country competitiveness overtones. supposedly if hdtv technology went the wrong way ... foreign industry could capture the hdtv technology manufacturing market ... and hdtv technology was going to be the basis for all new technologies.

there were issues raised about the fine details of the hdtv spec which might sway competitiveness one way or another. a problem was that there were already some receivers manufactured outside the us that had agile reception technology (the same set could process all three major analog conventions ... and work was being done so that receivers could dynamically adapt to handle any possible hdtv digital convention).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More on garbage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More on garbage
Newsgroups: sci.crypt,comp.arch
Date: 14 Jun 2005 13:05:43 -0600
Anne & Lynn Wheeler writes:
the pain scenario for security

P ... privacy (or sometimes confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation


i.e. re:
https://www.garlic.com/~lynn/2005k.html#23 More on garbage
another aspect of the different dimensions of PAIN characteristics as part of security ... is the confidentiality required for credit card transactions on the internet (or anyplace for that matter).

the issue has been that attackers can harvest
https://www.garlic.com/~lynn/subintegrity.html#harvest

the information and that information was sufficient for performing fraudulent transactions.

part of the issue ... is that harvesting things like merchant transaction files is worth a whole lot more to the crooks than the resources that are available to merchants for countermeasures ... in part because the information in the transaction logs is required for a wide range of other business processes ... and you just can't make the data totally disappear. ... part of security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

i've often joked that you could completely blanket the planet in miles deep encryption ... and there would still be account number leakage because of their use in various business processes.

so ... the x9a10 financial working group was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. the resulting x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#privacy

specifies that

1) x9.59 transactions have to be authenticated
2) PANs used in x9.59 authenticated transactions can't also be used in non-authenticated transactions

So the X9.59 PAN account numbers can still occur all over the place ... and harvesting of the information isn't sufficient for the crooks to generate fraudulent transactions. The issue here is that the business rule application of integrity then significantly minimizes the requirement for privacy in order to provide security. It is also somewhat a recognition that the pervasive business uses of PANs pretty much precludes that any application of encryption would be sufficient to close all the places where PANs can leak out.

Another approach that has been tried is one-time PANs ... once a specific PAN has been used in an authorized transaction ... the same PAN can't be used again in a different authorized payment transaction. Before use, the PANs have to be kept secret ... but after each PAN is used ... it can be utilized all over the place for numerous other business processes ... but can't be used again for another financial approval transaction. Again the issue is that once a PAN is used in a transaction ... there are all sorts of other operations (many backroom business processes) that subsequently need access to the PAN in connection with various and sundry business processes. You can't make those business processes go away ... and encryption can't be used to plug all the possible leak points.
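the one-time PAN rule can be sketched as a simple issuer-side registry check (purely illustrative -- the class name and interface are invented for this sketch, not from any actual deployment):

```python
class OneTimePanRegistry:
    """a PAN may authorize at most one payment; after that it can
    circulate freely through backroom business processes, but any
    further authorization attempt against it is refused."""

    def __init__(self, issued_pans):
        # PANs issued but not yet spent -- these must be kept secret
        self.unused = set(issued_pans)

    def authorize(self, pan):
        if pan in self.unused:
            self.unused.discard(pan)  # spent: harvesting it is now useless
            return True
        return False

reg = OneTimePanRegistry({"4000-0001", "4000-0002"})
print(reg.authorize("4000-0001"))  # first use: approved
print(reg.authorize("4000-0001"))  # replay of a harvested copy: refused
```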

In the x9.59 scenario the appropriate use of end-to-end business integrity ... significantly mitigates the existing fraud prevention requirements/need for preventing PAN information leakage (aka confidentiality and privacy) ... since simple knowledge of the PAN is no longer sufficient to perform fraudulent transactions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 14 Jun 2005 13:12:39 -0600
hancock4 writes:
Oh yes. The 1401 became the "bargain basement" computer and was still quietly marketed for a few years after S/360 came out. The boxes still had a little bit of life in them.

somewhat by the late 50s ... it was starting to be realized that software development and software conversions from one machine to another were becoming a dominant market factor in the computer industry.

the issue of coming up with a broad compatible computer line was an attempt to mitigate such significant cost issues for the customer in the future. that still didn't preclude a requirement for existing customers to convert whatever they currently had to any new platform.

any existing implementations might even be expected to linger for some time ... and customers might even find that it was cheaper to throw hardware at the application ... than to convert it ... eventually hoping for some future demise ... possibly because of (application) obsolescence.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM/Watson autobiography--thoughts on?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM/Watson autobiography--thoughts on?
Newsgroups: alt.folklore.computers
Date: 14 Jun 2005 14:13:18 -0600
hancock4 writes:
I presume in the early days of computing the hardware was so incredibly expensive compared to programmer wages that software cost wasn't as big as a concern yet. Programmers spent a heck of a lot of time shoehorning applications into a tiny memory space, detecting and recovering from numerous hardware errors, and pushing the technical envelope with fancy tricks.

At some point along the way good programmers became scarce. Further, at some point along the way the curves of the cost of hardware vs. the cost of software crossed and attitudes changed.

I remember a comp sci teacher telling us that Fortran logical IFs were inefficient and to use arithmetic IFs instead (never found out if that was really true on S/360 or B-5500). As a team leader, I pushed COBOL COMP-3 for numeric fields and COMP SYNC for internal fields such as subscripts. I still use that stuff but for modest sized files it doesn't seem to make much run-time difference with today's superfast machines. If my employer's mainframe isn't real busy a complex job runs in less than a second! (And this mainframe does the work of four older ones.)

P.S. Saw the write-up on you as a computer historian in a recent IBM magazine. Neat article, congratulations!


btw ... has anybody actually seen a hardcopy? they had a photographer come out for a photoshoot ... but pictures don't show up in the online version.

... back to the thread ...

note that it wasn't necessarily either programmer salary or hardware costs that were always the primary factor. whether or not an application was available on the next larger machine (as the company grew) could dominate all costs (or from the other viewpoint ... costs associated with lack of application availability could dominate).

in more recent scenarios about availability ... when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we talked to various companies about what the costs would be if the application was not available ... these examples are a little more severe than some of the costs associated with lack of application availability from the 50s (however, not having the cost savings from some dataprocessing application can be turned around and viewed as a loss ... and/or to justify hardware and programmer expenses).

one financial company that had an application that managed float on cashflow ... claimed the application earned more in 24hrs than a year's lease on the 50-story office bldg (it was housed in) plus a year's salary for every person that worked in the bldg. (conversely if the application was not available ... they didn't earn that money).

another company with a several hundred million dollar datacenter claimed if the datacenter was down for a week, the loss to the company would be more than the cost of the datacenter (i.e. they easily justified the several hundred million dollar expense for duplicating the datacenter). this was in the era when we coined the terms disaster survivability and geographic survivability to differentiate from disaster/recovery.

for some topic drift ... there is a recent thread somewhat related to how much availability should there be (do applications with privacy requirements require equivalent availability requirements)
https://www.garlic.com/~lynn/2005k.html#23

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More Phishing scams, still no SSL being used

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More Phishing scams, still no SSL being used...
Newsgroups: netscape.public.mozilla.crypto
Date: 14 Jun 2005 16:41:16 -0600
"Deacon, Alex" writes:
Do you have any suggestions as to how the setting of these OCSP time values should be done? I guess it's not clear to me why you feel the CAs need to agree on this. Why wouldn't the client simply make its decision based on its local time (which I agree may be far from correct) and the values in the response? Clients make these decisions every day with certs, so why would OCSP responses be any different? Is it the "producedAt" time that confuses the issue?

Regarding the various trust models, I agree there are too many choices. The "delegated" trust model is the only one that really makes sense for large consumer-facing PKIs in my opinion.


one of the issues in the CRL push model ... is that it's the relying party which is judging the risk (sort of the inverse of trust) ... and they know the basis of their dynamic risk parameters ... one issue is that as the value of the transaction goes up ... the risk goes up. the other is that the longer the time interval ... the bigger the risk.

the problem was that since it is the relying party that is taking the risk ... and understands their own situation ... it should be they that decide the parameters of their risk operation ... i.e. as the value of the transaction goes up ... they may want to reduce risk in other ways ... which might include things like trust time windows.
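one way a relying party might encode that trade-off ... purely illustrative thresholds, not from any PKI standard or actual deployment:

```python
# illustrative: the relying party shrinks the acceptable age of
# revocation data (CRL/OCSP response) as transaction value grows,
# since both value and staleness increase its risk.
def max_status_age_seconds(transaction_value: float) -> float:
    """Acceptable age of revocation status data for this transaction."""
    if transaction_value < 100:
        return 7 * 24 * 3600   # a week-old CRL is fine for small amounts
    if transaction_value < 10_000:
        return 24 * 3600       # a day
    return 300                 # high value: demand near-real-time status

def accept(status_age_seconds: float, transaction_value: float) -> bool:
    return status_age_seconds <= max_status_age_seconds(transaction_value)

assert accept(3600, 50) is True        # hour-old data, small amount: fine
assert accept(3600, 50_000) is False   # hour-old data, high value: too stale
```

the point is that the relying party, knowing its own risk situation, sets these windows ... rather than having trust parameters pushed at it by the key owner's CA contract.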

in normal traditional business scenario ... the relying party is the one deciding how often they might contact 3rd party trust agencies (i.e. example like credit bureaus).

PKI/certificate operations have frequently totally inverted standard business trust processes. instead of the relying party being able to make contractual agreements and make business decisions supporting their risk & trust decisions .... the key owner has the contractual agreement with any 3rd party trust operation (i.e. the key owner buys a certificate from the CA).

The digital certificate model has been targeted at the offline business situation where the relying party had no other recourse to the real information (sort of the letters of credit scenario from the sailing ship days). This sort of continued to exist in market niches where the value of the operation didn't justify the relying party having direct and timely access to the real information. The problem was that as the internet has become more & more ubiquitous and as the cost of direct and timely access to the real information has dropped ... digital certificates are finding the low/no-value market segment shrinking (as the cost of direct access to the real information drops, relying parties can justify using real information in place of stale, static certificates for lower & lower valued operations).

A problem facing a PKI/certificate model is that

1) it is a business solution that was designed to solve a problem that is rapidly disappearing ... a relying party unable to have, and/or unable to justify, direct and timely access to the real information (in lieu of stale, static certificate information)

2) it tends to have been deployed where the contractual business relationships didn't follow commonly accepted business practices.

From a different standpoint ... rather than having propagated trust pushed to the relying party ... the standard business model has the relying party making the decision about the required level of integrity and trust for the business at hand and then tends to pull the information whenever economically and practically feasible.

The original PKI/certificate model was targeted at the market segment where the relying party didn't have a practically feasible recourse (for timely and direct access to the real information). As practical direct and timely access to the real information has been deployed, PKI/certificate business operations have attempted to move into the market segment where it may still not be economically justified for the relying party to have direct and timely access to the real information (and where the relying party has direct business control over those operations).

However, with not only a ubiquitous online environment coming about ... but also a rapid decline in the cost of that ubiquitous online environment ... it is easier and easier for relying parties to justify direct and timely access to the real information .... leaving the no-value market niche for the PKI/certificate business operation. One business downside is that when trying to address the no-value market niche ... it may be difficult to convince relying parties to pay very much for certificates in support of no-value operations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt,comp.arch
Date: 14 Jun 2005 17:25:07 -0600
"Jon A. Solworth" writes:
By the way, I think Jails/sandboxes/VMs are great but I think that more mechanism is needed.

total aside ... precursor to eros was gnosis. gnosis was developed by tymshare sort of out of their commercial mainframe vm time sharing system ...
https://www.garlic.com/~lynn/submain.html#timeshare

when tymshare was bought ... some number of things were spun-off. I got brought in for due diligence evaluation on gnosis for the keykos spin-off (somewhere in a box i may have some sort of gnosis specification document)

for only a little (VM) topic drift ...
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

some keykos references
http://cap-lore.com/CapTheory/upenn/
http://www.agorics.com/Library/keykosindex.html

random past gnosis/keykos/eros postings:
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001n.html#10 TSS/360
https://www.garlic.com/~lynn/2002f.html#59 Blade architectures
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2003h.html#41 Segments, capabilities, buffer overrun attacks
https://www.garlic.com/~lynn/2003i.html#15 two pi, four phase, 370 clone
https://www.garlic.com/~lynn/2003j.html#20 A Dark Day
https://www.garlic.com/~lynn/2003k.html#50 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#22 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#26 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#54 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004e.html#27 NSF interest in Multics security
https://www.garlic.com/~lynn/2004m.html#29 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#49 EAL5
https://www.garlic.com/~lynn/2004n.html#41 Multi-processor timing issue
https://www.garlic.com/~lynn/2004o.html#33 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005b.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#7 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#12 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#67 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#43 Secure design
https://www.garlic.com/~lynn/2005d.html#50 Secure design
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005.html#7 How do you say "gnus"?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Banks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Banks
Newsgroups: bit.listserv.ibm-main
Date: 14 Jun 2005 20:42:21 -0600
re:
https://www.garlic.com/~lynn/2005j.html#52 Banks
https://www.garlic.com/~lynn/2005j.html#53 Banks

harvesting related posting from today
https://www.garlic.com/~lynn/2005k.html#26

in a thread about how much of various kinds of security might be needed
https://www.garlic.com/~lynn/2005k.html#23

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)
Newsgroups: alt.folklore.computers
Date: 14 Jun 2005 21:53:16 -0600
forbin@dev.nul (Colonel Forbin) writes:
A high school student working as a waitron in a restaurant can easily capture dozens of credit card numbers in a half hour, including the expiry date and the "authentication code."

There was a somewhat dated Dilbert strip to this effect.


somewhat related posting
https://www.garlic.com/~lynn/2005k.html#26 More on garbage
started off here
https://www.garlic.com/~lynn/2005k.html#23 More on garbage

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How much RAM is 64K (36-bit) words of Core Memory?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How much RAM is 64K (36-bit) words of Core Memory?
Newsgroups: alt.folklore.computers
Date: 14 Jun 2005 22:03:21 -0600
"Phil Weldon" writes:
On one side is core memory: multiple-microsecond cycle time ("cycle" because reading a core memory bit requires inverting, then restoration), hugely expensive (~$0.20 per bit per month), many different types of incompatible memory organization and data formats, bigger than a breadbox (~30,000 bits).

the original 360 models were 30, 40, 50, 60, 62, & 70.

60, 62 & 70 were going to have one mic (microsecond) core store (with one mbyte ... four "core boxes" ... four-way interleave and 8-byte fetch/store). 70 was going to be a hard-wired, faster version of the 60. 62 was going to be a 60 ... with virtual memory/dat-box.

before they shipped, 750ns core memory technology was developed, the 60, 62, 70 never shipped, and upgraded 65, 67, & 75 shipped with the 750ns memory.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How much RAM is 64K (36-bit) words of Core Memory?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How much RAM is 64K (36-bit) words of Core Memory?
Newsgroups: alt.folklore.computers
Date: 14 Jun 2005 22:21:02 -0600
"Phil Weldon" writes:
Also, in the epoch of core memory, great emphasis was placed on programing to reduce memory requirements even at the expense of execution time because memory was so expensive and I/O so slow.

If you want a number just divide the number of bits in a word by 8 to get rough equivalence. It won't mean much, but there it is. As far as I remember, 'core memory' had come and gone by the time the IBM System 360 popularized 'byte'. In fact, 'core memory' had a pretty short run.


I/O was relatively faster ... i've claimed that ckd (count-key-data) disk
https://www.garlic.com/~lynn/submain.html#dasd

was trading off relatively abundant i/o capacity against relatively scarce and expensive real memory. indexes for files and libraries were kept on disk and I/O search commands could be used to take strings from main memory and look for the corresponding value out in disk structures. a disk volume directory ... VTOC (volume table of contents) would use multi-track search ... doing a filename lookup search for every file open operation. library files ... PDS (partitioned data set) also used multi-track search of the library index to find specific members in the library.

by the mid-70s, the trade-off started to change ... with i/o thruput becoming a significantly more constrained resource ... and much larger real memories being available. caching of indexes, data, files, etc is taken for granted now (the reverse of the 60s, using abundant real memory as a trade-off against relatively scarce i/o capacity) ... but back then, they remained all on disk.

i was once called into a large national retail operation that ran a large multisystem os batch environment and was regularly having severe thruput problems. after looking at tons of data trying to correlate disk usage and thruput across multiple systems (sharing the same disks ... but each system only reporting its individual disk usage activity) ... I started to zoom in on the problem.

turns out that they had a shared application library/PDS ... shared across all systems. The library/PDS had a three-cylinder 3330 PDS directory. Every time an application library member was fetched ... it had to do a multi-track search of the 3-cylinder PDS directory. a 3330 cylinder had 19 tracks ... avg. search 1.5 cylinders or approx. 29 tracks. 3330 spun at 3600rpm or 60revs/sec. multitrack search of 29 tracks took just under .5 seconds ... during which time the drive, controller and channel were all busy. In this condition ... the avg. application member loading per second was just under two ... this was the aggregate across all machines in the datacenter (all bottlenecking on the same shared disk library). PDS and multitrack search are still around ... as opposed to other environments that make extensive use of electronic memory for caching of disk and file structures as well as actual data.
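the arithmetic above can be checked directly:

```python
# reproducing the 3330 PDS-directory numbers from the text above
tracks_per_cylinder = 19
directory_cylinders = 3
rpm = 3600

revs_per_second = rpm / 60                                          # 60 revs/sec
avg_tracks_searched = directory_cylinders * tracks_per_cylinder / 2  # 28.5 tracks
search_seconds = avg_tracks_searched / revs_per_second               # 0.475 s

print(f"avg multi-track search: {search_seconds:.3f} s")  # 0.475 s
# channel, controller and drive are all busy for that ~half second, so
# with seek and member read added, the whole datacenter tops out at
# just under two member fetches per second on this shared library
print(f"upper bound, fetches/sec: {1 / search_seconds:.2f}")  # ~2.1
```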

in the late 70s i started to make comments about the drastic reduction in disk relative system thruput ... at one point making the observation that disk relative system thruput had declined by a factor of 10 over a period of 15 years. the disk division didn't care for this and assigned their performance group to refute the statement. after a couple months, they came back and essentially said that i had slightly understated the problem (aka if processor and memory had increased by factors of 50 and disk had increased by a factor of less than five ... then disk relative system performance had declined by a factor of 10).
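the back-of-envelope behind the factor-of-10 figure:

```python
# processor & memory capability grew ~50x over ~15 years while disk
# thruput grew by less than 5x, so disk *relative* system thruput
# declined by roughly 50/5 = 10x
cpu_memory_growth = 50
disk_growth = 5  # generous upper bound per the text

relative_decline = cpu_memory_growth / disk_growth
print(relative_decline)  # 10.0
```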

misc. past refs on this
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Determining processor status without IPIs

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Determining processor status without IPIs
Newsgroups: comp.arch
Date: 15 Jun 2005 07:39:48 -0600
Joe Seigh writes:
Is there a way in theory to determine the running/not running status of other processors without resorting to a probably expensive IPI operation? This is in the context of a virtual environment where the processors are virtual and may or may not be running on a real processor at the moment. An obvious solution would be to provide a hypervisor call to provide the virtual processor status but you have the problem of multiple VM OSes and they're barely keeping up with basic minimal simulation as it is. It would be nice if Intel and AMD were a little proactive in the virtualization area and architected it in as part of the basic architecture rather than after the fact with a too little, too late solution.

in the 60s and early 70s ... there were some number of programs that thought they were "stand-alone" on the real machine and would do things like TIO busy loops ... or something similar. things like TIO would enter the hypervisor kernel for emulation in any case (and special traps were inserted to special-case some of the more onerous cases).

off the wall reference previously cited yesterday in a different thread:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

the mainframe virtual machine microcode performance assists eventually evolved into also providing the "LPAR" subset. the performance assists still work for the hypervisor operating system ... but a subset of the hypervisor operating system has been instantiated in the microcode of the basic machine ... making it possible for installations to partition the machine for production operation. besides the virtualizing tricks for pure hypervisor kernel operation ... quite a few also provide for operation in the LPAR environment.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Determining processor status without IPIs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Determining processor status without IPIs
Newsgroups: comp.arch
Date: 15 Jun 2005 08:32:51 -0600
Joe Seigh writes:
I'm aware of some of the things VM and its guest machines used to do since I worked in VM development at one time. Guest machines knew if they were running on virtual processors and would use spin wait loops with hypervisor calls to preempt and not waste cycles spinning. This was feasible since there was only one VM hypervisor and the guest OSes were all by the same vendor, IBM.

later on things somewhat improved ... but in the early days ... cambridge and cp67 (and even into the vm370 days) were frequently viewed as the enemy in many quarters ... in part because they appeared to be an internal operation competing with "strategic" efforts for internal resources ... external competitors were sometimes treated better than internal competitors.

i worked on some number of projects where the effort i was on was construed as competing with an official, corporate "strategic" effort ... and the official corporate "strategic" effort would prefer to subcontract to an outside entity (which provided no internal competition) than deal with another internal operation.

minor ref to one such effort ...
https://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?

also ... while unbundling had been announced on 6/23/69 ... kernel software was still free ... and all sorts of people were in the habit of extensively modifying kernel source. it wasn't until into the mid-70s that transition started being made to licensing and charging for kernels ... and that happened incrementally. when i was doing the resource manager ... it got selected to be the guinea pig for licensed/priced kernel software (and i got the prize to spend six months on and off with the business people working on the business rules for pricing kernel software).

lots of past unbundling related postings:
https://www.garlic.com/~lynn/99.html#58 When did IBM go object only
https://www.garlic.com/~lynn/2001c.html#18 On RC4 in C
https://www.garlic.com/~lynn/2001l.html#30 mainframe question
https://www.garlic.com/~lynn/2002b.html#27 IBM SHRINKS by 10 percent
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002e.html#62 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#44 50 years ago (1952)?
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2002p.html#7 myths about Multics
https://www.garlic.com/~lynn/2003e.html#18 unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003g.html#58 40th Anniversary of IBM System/360
https://www.garlic.com/~lynn/2003g.html#66 software pricing
https://www.garlic.com/~lynn/2003h.html#36 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003k.html#1 Dealing with complexity
https://www.garlic.com/~lynn/2003k.html#46 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2004e.html#6 What is the truth ?
https://www.garlic.com/~lynn/2004e.html#10 What is the truth ?
https://www.garlic.com/~lynn/2004f.html#3 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#2 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005c.html#42 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#35 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005g.html#51 Security via hardware?
https://www.garlic.com/~lynn/2005g.html#55 Security via hardware?
https://www.garlic.com/~lynn/2005i.html#46 Friday question: How far back is PLO instruction supported?
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005k.html#12 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#13 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#20 IBM/Watson autobiography--thoughts on?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: 15 Jun 2005 08:51:17 -0600
Lawrence Statton N1GAK/XE2 writes:
Hell, Barb -- I can't remember what I had for breakfast yesterday, much less what flight I took 16 years ago[1] :). I can guess that it was probably American -- they had a routing BOS/SLC/SJC that I used for most of my Calif <--> Boston travels. On the other hand, I was going to stay with friends in Oakland, so I might have flown into OAK, in which case I have no idea what carrier I took. On the gripping hand, I also had a soft spot for Delta, because they served Pepsi products, and at that point in my life I was quite brand-loyal. Now I'm old and my taste buds have all died, so I'll drink just about any brown cold fizzy beverage. Except Moxy.

at the height of the internet bubble, american put in non-stops between sjc and bos as well as sjc and aus.

in the early 80s, i used to take twa #44 red-eye sfo to kennedy a couple times a month on monday night and return on twa #857 (? ... the tel aviv, rome, kennedy, sfo flight) on friday afternoon. twa went bankrupt and then i switched to panam (for the monday night redeye). panam then sold its pacific fleet to united to concentrate on atlantic routes. i then switched to american for the monday night redeye ... although sometimes took united (both my twa miles and my panam miles just evaporated).

i do blame twa for direct (non-connecting) flights with change of equipment. in the very early 70s, the overnight parking fee at sfo was such that it was cheaper for twa to fly the short hop to sjc and park the plane overnight there. then in the morning the plane would fly back to sfo. on the sjc->sfo leg, it carried two flight numbers ... one where the equipment continued to seattle ... and the other was a change of equipment in sfo that went to kennedy. the explanation was that on reservation screens all the direct (non-connecting) flights are listed first ... followed by connecting flights. The change of equipment gimmick ... managed to get "connecting" flights listed at the front along with all the other direct flights (direct flights had a much higher probability of being reserved than connecting flights that appeared later on the screen). After that, i started to notice some number of other flights (typically with multiple flight numbers) that involved the "direct" gimmick with change of equipment.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Determining processor status without IPIs

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Subject: Re: Determining processor status without IPIs
Date: Wed, 15 Jun 2005 19:01:59 -0700
here is a recent posting of stuff i did as undergraduate for cms when running in a virtual machine ... to optimize virtualizing overhead (all the source was shipped and lots of customers could make modifications ... this was also before the unbundling announcement)
https://www.garlic.com/~lynn/2005j.html#54 ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

there are a couple other issues when there is paging going on.

1)

if the hypervisor is paging virtual machine address space ... and the virtual machine is a multi-tasking operating system ... then it is possible to reflect some sort of pseudo page fault to the operating system running in the virtual machine ... potentially allowing the virtual operating system to task switch. one of the universities running cp67 with mvt ("real" address space multitasking batch operating system) modified cp67 to reflect pseudo page faults to mvt ... and mvt to accept the interrupt and attempt to task switch.

ibm in the mid-70s did something similar as part of the 148 ecps project for vs1 ... defining a pseudo page fault function for vm/370 and modifying vs1 to use it and attempt a task switch

a couple past posts on pseudo page fault handling
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
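
the pseudo page fault reflection described above can be sketched as a toy model (python, all names illustrative ... not actual cp67/vs1 interfaces): instead of suspending the whole virtual machine while a page comes in, the hypervisor reflects the fault to the guest, the guest marks the faulting task waiting and dispatches another, and a completion makes the task ready again.

```python
# toy sketch of pseudo page fault reflection; class and method names
# are hypothetical, not cp67/vs1 interfaces.

class Guest:
    def __init__(self, tasks):
        self.tasks = tasks          # task name -> "ready" / "waiting"
        self.current = None

    def pseudo_page_fault(self, task):
        # guest marks the faulting task waiting and dispatches another
        self.tasks[task] = "waiting"
        ready = [t for t, s in self.tasks.items() if s == "ready"]
        self.current = ready[0] if ready else None   # idle if none ready

    def pseudo_fault_complete(self, task):
        # page-in finished: task is runnable again
        self.tasks[task] = "ready"

class Hypervisor:
    def handle_page_fault(self, guest, task):
        # reflect the fault instead of blocking the whole virtual machine
        guest.pseudo_page_fault(task)
        # ... hypervisor schedules the page-in; on i/o completion:
        guest.pseudo_fault_complete(task)
```

without the reflection, the entire virtual machine would sit idle for the duration of the hypervisor page-in, even with other runnable guest tasks.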

2)

if the hypervisor is paging virtual machine address space ... and the virtual operating system might also be doing paging ... it is possible to get into LRU page replacement conflict. The virtual machine operating system might be searching for the least recently used page to replace and assign to something else. The hypervisor may also be looking for the least recently used page to replace. You can get into some pathological situations where the hypervisor selects and replaces a virtual page ... that the virtual guest operating system has just decided to start using for some other purpose. The idea behind LRU is that the least recently used page is supposedly the least likely to be used in the future. A LRU algorithm running in a virtual machine effectively invalidates that assumption ... the least recently used page is highly likely to (also) be selected by the virtual operating system for immediate use.

some past posts mentioning running LRU replacement under a LRU replacement (aka LRU algorithm doesn't recurse very well)
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#51 Rethinking Virtual Memory
https://www.garlic.com/~lynn/95.html#2 Why is there only VM/370?
https://www.garlic.com/~lynn/2004l.html#66 Lock-free algorithms
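
the pathology is easy to see in a toy model (python, purely illustrative): when guest and hypervisor observe the same reference order, both LRU policies pick the same victim ... so the page the hypervisor just stole is exactly the page the guest reassigns and touches next.

```python
# toy illustration of LRU-under-LRU: both levels rank the same pages by
# recency, so both pick the same "least recently used" victim.
from collections import OrderedDict

def lru_victim(recency):
    # recency: OrderedDict ordered oldest-first -> victim is the first key
    return next(iter(recency))

# guest and hypervisor see the same reference order
refs = ["p1", "p2", "p3", "p4"]
recency = OrderedDict((p, None) for p in refs)   # p1 touched longest ago

hypervisor_steals = lru_victim(recency)   # hypervisor pages p1 out ...
guest_reassigns = lru_victim(recency)     # ... guest reuses p1 immediately
assert hypervisor_steals == guest_reassigns == "p1"
# net effect: the page judged "least likely to be used" page-faults right back in
```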

Determining processor status without IPIs

From: lynn@garlic.com
Newsgroups: comp.arch
Subject: Re: Determining processor status without IPIs
Date: Fri, 17 Jun 2005 06:50:50 -0700
Andi Kleen wrote:
On the other hand shadow page tables have trouble managing the Dirty bits properly (or rather if you want to manage them properly you have to eat a lot of additional faults), so if you have guests that rely or better perform better with accurate dirty bits then they are better.

cp67 had
1st level ... or real memory ....
2nd level ... virtual address space for the virtual machine
3rd level ... virtual address space tables done by the virtual machine


the virtual machine's virtual address space tables did the mapping from "3rd level" to "2nd level" ... when actually running a virtual address space belonging to the virtual machine ... cp67 used shadow page tables to do the "3rd level to 1st level" mapping. The change/update processing for the shadow page tables followed the architecture rules for the hardware TLB.

2nd level memory had two sets of referenced&changed bits. cp67 could move virtual machine pages into & out of real memory. cp67 maintained reference and change bits for virtual pages that were actually resident in real memory. in addition there were the virtual reference&change bits ... the (virtual) state of the (virtual) pages in the "2nd level" space.

In effect there were the real hardware reference and change bits and two sets of "backup" reference and change bits (one for the hypervisor kernel and one for the virtual machine).

Anytime the hypervisor did a change to the real reference and change bits ... the current value of the real hardware reference and change bits was OR'ed into the virtual machine backup bits, the real hardware bits were reset to zero ... and the hypervisor's setting was placed in the hypervisor backup bits.

Anytime the virtual machine did a change to the real reference and change bits ... the current value of the real hardware R&C bits was OR'ed into the hypervisor backup bits, the real hardware bits were reset to zero ... and the virtual machine's setting was placed in the virtual machine backup bits.

Anytime the hypervisor interrogated the R&C bits, it OR'ed the real hardware bits with the hypervisor backup bits

Anytime the virtual machine interrogated the R&C bits, it OR'ed the real hardware bits with the virtual machine backup bits.
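
the double bookkeeping above can be written out as a small sketch (python, illustrative names only): one set of real hardware R&C bits per page plus two backup sets, with the OR-and-reset discipline on every set and the OR on every interrogate.

```python
# sketch of the R&C bit virtualization described above; names are
# illustrative. bits are a 2-bit mask: reference=1, change=2.

class PageRC:
    def __init__(self):
        self.hw = 0     # real hardware R&C bits
        self.hyp = 0    # hypervisor backup bits
        self.vm = 0     # virtual machine backup bits

    def hypervisor_set(self, bits):
        self.vm |= self.hw    # OR current hw bits into VM backup
        self.hw = 0           # reset real hardware bits
        self.hyp = bits       # hypervisor's setting in its backup

    def vm_set(self, bits):
        self.hyp |= self.hw   # OR current hw bits into hypervisor backup
        self.hw = 0
        self.vm = bits

    def hypervisor_read(self):
        return self.hw | self.hyp

    def vm_read(self):
        return self.hw | self.vm
```

e.g. if a page has been referenced and changed (hw == 3) and the hypervisor resets the bits for its own replacement scan, the virtual machine still sees its page as referenced and changed, while the hypervisor sees it as clean.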

IBM/Watson autobiography--thoughts on?

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: IBM/Watson autobiography--thoughts on?
Date: Fri, 17 Jun 2005 09:34:46 -0700
John R. Levine wrote:
>But I am surprised that the _incremental_ cost of CPU time to pre-sort an input file would be greater than manual sorting, especially when the sorted file would be stored on a temporary disk file.

I'm not. In that era, there was often a meter on the CPU that timed how much you used it (wall clock time), so the incremental rate was the same as any other rate.


basically leases were somewhat like cellphone billing ... basic plan and possibly a lot for overages ... based on cpu meter.

the meter ran while the processor wasn't in wait state and/or when the channels were active.

one of the things that enabled 7x24 cp67 time-sharing service was the conversion to the "prepare" command for telephone lines. typically timesharing users were billed for cpu time used ... but the datacenter was billed for the cpu meter running (which could also run when the processor was idle ... but the channel was active). the prepare command to the terminal controller ... basically told the controller to wait for input from the terminal ... but disconnect from the channel. prior to having the "prepare" command in the channel program ... the channel ran just waiting for terminal input (and the cpu meter ran even if nothing else was going on).

misc. timesharing postings
https://www.garlic.com/~lynn/submain.html#timeshare

having the prepare command ... allowed the service to be up & running and ready for user activity ... but otherwise have the cpu meter stop (and not incur any leasing charges) when the system was otherwise idle (and therefore not earning any revenue from users).

the 370s still had cpu meters ... the big conversion from lease to purchase still hadn't happened.

the meter on the 370 tended to "coast" for 400 milliseconds ... after everything had otherwise stopped ... aka both cpu and channels had to be idle for more than 400 milliseconds for the cpu meter to actually stop. guess which operating system had a kernel process that would wake up every 400 milliseconds?

random past cpu meter posts:
https://www.garlic.com/~lynn/99.html#86 1401 Wordmark?
https://www.garlic.com/~lynn/2000b.html#77 write rings
https://www.garlic.com/~lynn/2000d.html#40 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2000d.html#42 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001h.html#59 Blinkenlights
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002i.html#49 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#64 History of AOL
https://www.garlic.com/~lynn/2002l.html#62 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2002m.html#61 The next big things that weren't
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#52 Computing on Demand ... was cpu metering
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002p.html#37 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002q.html#51 windows office xp
https://www.garlic.com/~lynn/2003e.html#3 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003k.html#10 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003o.html#23 Tools -vs- Utility
https://www.garlic.com/~lynn/2003p.html#32 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded

Title screen for HLA Adventure? Need help designing one

Refed: **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.lang.asm,alt.folklore.computers,rec.arts.int-fiction
Subject: Re: Title screen for HLA Adventure? Need help designing one
Date: Fri, 17 Jun 2005 09:51:18 -0700
Jukka Aho wrote:
I think the point in this discussion has been that, without referring to the actual standard number or designation, saying that something is "ANSI" does not mean anything at all. ANSI X3.64 ("Control Sequences for Video Terminals and Peripherals") is one thing, ANSI X3.4-1968 ("American National Standard Code for Information Interchange (ASCII)") is something completely different, ANSI H35.2 ("Dimensional Tolerances for Aluminum Mill Products") is yet another thing again. :)

As far as I know, there is no ANSI-issued standard for IBM Codepage 437.


ibm mainframes had an additional issue with ascii ... which we discovered when we were fitting up an interdata/3 with a mainframe channel adapter card and programming it to emulate an ibm terminal controller.

ibm terminal controllers had the convention of storing the leading bit off the line into the low-order bit position of a byte ... rather than the high-order bit position of a byte. as a result, when terminal ascii appeared in the memory of the mainframe processor ... all the ascii "bytes" were bit-reversed. as a result the mainframe ascii<->ebcdic translate tables were for bit-reversed ascii bytes. one of the early tests of the interdata/3 terminal controller emulation had ascii bytes being transferred to the 360 memory non-bit-reversed and coming out garbage after being run thru an ibm bit-reversed ascii->ebcdic translate table.
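
the effect can be shown with a tiny sketch (python, hypothetical one-entry translate table ... not the actual ibm tables): a table indexed by bit-reversed ascii gives the right ebcdic for bit-reversed input and garbage (here, a miss) for raw ascii.

```python
# illustration of the bit-reversed ascii convention: the translate
# table is indexed by bit-reversed ascii bytes, so non-reversed input
# misses. the one-entry table is hypothetical.

def bit_reverse(b):
    # reverse the 8 bits of one byte, e.g. 0x41 (01000001) -> 0x82 (10000010)
    return int(f"{b:08b}"[::-1], 2)

ASCII_A, EBCDIC_A = 0x41, 0xC1
table = {bit_reverse(ASCII_A): EBCDIC_A}   # indexed by bit-reversed ascii

assert table[bit_reverse(ASCII_A)] == EBCDIC_A   # reversed input: correct
assert table.get(ASCII_A) is None                # raw input: garbage/miss
```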

misc. past posts about doing terminal controller replacement
https://www.garlic.com/~lynn/submain.html#360pcm

wheeler scheduler and hpo

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: wheeler scheduler and hpo
Date: Fri, 17 Jun 2005 10:35:07 -0700
reply to recent email question about wheeler scheduler and "HPO"

early "wheeler scheduler" that i did as undergraduate ... went into cp67 ... and then dropped in the morph to vm370. i had done a bunch of "virtual memory management" stuff on cp67 at the science center.
https://www.garlic.com/~lynn/subtopic.html#545tech

a small subset of that was incorporated into base vm370 release 3. then came the resource manager ... including the wheeler scheduler ... and a whole bunch of other stuff, including a bunch of restructuring for multiprocessor work ... much of which had been done for the multiprocessor VAMPS project (which was canceled before being announced)
https://www.garlic.com/~lynn/submain.html#bounce

the resource manager was the guinea pig for the first charged-for kernel code.

full smp multiprocessor support was released in vm370 release 4. the problem was that it was dependent on a bunch of restructuring stuff that i had done for smp and that was already out in the resource manager. basic kernel stuff related directly to hardware was still free ... and you couldn't have free kernel code (the smp stuff in release 4) dependent on priced software (lots of stuff in the resource manager). to resolve this ... about 80-90 percent of the code from the resource manager was merged into the "free" kernel. come release 5 ... the remaining resource manager code (including the "wheeler" scheduler) was combined with multiple shadow table support and a couple other things for "HPO".

the base kernel was still free ... but the "add-ons" were priced software.

the original cp67 support for virtual machines that supported (virtual) virtual memory ... just kept a single set of shadow page tables around (per virtual machine). The initial HPO code (combined with what had been called the resource manager) kept around multiple sets of shadow page tables per virtual machine ... when running mvs with multiple virtual address spaces, you didn't have to completely invalidate all the shadow table entries whenever mvs switched address spaces ... you could keep around more state information.
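
keeping multiple shadow table sets can be sketched as a small cache keyed by which guest address space is running (python, all names hypothetical ... not the actual HPO data structures): switching back to a previously-run space finds its shadow translations intact instead of starting from a freshly invalidated table.

```python
# illustrative sketch of multiple shadow page table sets per virtual
# machine, keyed by guest address space; names are made up.

class ShadowTables:
    def __init__(self, max_sets=4):
        self.sets = {}            # guest space id -> shadow table (dict)
        self.max_sets = max_sets

    def switch_to(self, space_id):
        if space_id not in self.sets:
            if len(self.sets) >= self.max_sets:
                self.sets.pop(next(iter(self.sets)))  # evict one set
            self.sets[space_id] = {}  # new set: all entries invalid
        return self.sets[space_id]    # prior translations preserved

shadow = ShadowTables()
s1 = shadow.switch_to("mvs-space-1")
s1["virt-page-5"] = "real-frame-9"   # shadow entry filled on first fault
shadow.switch_to("mvs-space-2")      # guest switches address space
# switching back finds the old shadow state intact (no re-faulting):
assert shadow.switch_to("mvs-space-1")["virt-page-5"] == "real-frame-9"
```

with a single shadow table, the switch to "mvs-space-2" would have forced invalidating every entry, and the switch back would re-fault them all in.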

as an aside ... ECPS was done in release 3 plc4
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

some recent posts on being guinea pig for priced kernel software
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005c.html#42 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005g.html#55 Security via hardware?

some recent posting on shadow tables
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#70 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#11 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005.html#54 creat
https://www.garlic.com/~lynn/2005i.html#10 Revoking the Root
https://www.garlic.com/~lynn/2005j.html#38 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005k.html#5 IBM/Watson autobiography--thoughts on?

some recent postings mentioning "virtual memory management"
https://www.garlic.com/~lynn/2005b.html#8 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#38 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

some recent postings mentioning resource manager:
https://www.garlic.com/~lynn/2005b.html#8 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005b.html#58 History of performance counters
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#42 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005d.html#60 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#35 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#46 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#18 DOS/360: Forty years
https://www.garlic.com/~lynn/2005g.html#55 Security via hardware?
https://www.garlic.com/~lynn/2005h.html#1 Single System Image questions
https://www.garlic.com/~lynn/2005.html#2 Athlon cache question
https://www.garlic.com/~lynn/2005.html#43 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005i.html#39 Behavior in undefined areas?
https://www.garlic.com/~lynn/2005i.html#49 Where should the type information be?
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#36 Determining processor status without IPIs

Determining processor status without IPIs

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Subject: Re: Determining processor status without IPIs
Date: Fri, 17 Jun 2005 13:02:07 -0700
Andi Kleen wrote:
I assume you mean the hypervisor with virtual machine here.

The real problem these days seems to be more to manage them for the guest OS. Guest OS do swapping fine on their own and adding another layer in the hypervisor would be just wasteful. And the machines have enough memory to run quite a lot of guests.

If you need to steal pages from a guest use a custom balloning driver in the guest that allocates memory and gives the pages it allocated back to the hypervisor. That seems to be more efficient than having two different swapping mechanisms in guest and hypervisor be fighting with each other.


depends ...

at one extreme is today's mainframe machine hypervisor called LPARs (logical partitions); it is built into the hardware/microcode of the machine ... supporting a limited number of partitions (virtual machines), and many customers now run in LPAR mode as part of their normal production operation. LPARs support a limited number of virtual machines with contiguous, dedicated real storage (using base/bound relocation for virtual memory).

sort of at the other extreme is a posting from several years ago of somebody having created 40,000+ linux virtual machines under the virtual machine software hypervisor. the address spaces of these virtual machines were paged (no dedicated real storage). the virtual machine software hypervisor happened to be running in a "test" LPAR (i.e. in a LPAR-defined virtual machine with limited assigned storage and processor) ... couple old posts referencing the 40,000 linux virtual machine scenario.
https://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
https://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes

a flavor of the 40,000 linux virtual machine scenario (usually only a couple thousand) has been used for webserver farms ... where somebody gets their dedicated webserver virtual machine ... isolated/partitioned from other entities. works for webservers that have relatively light to medium loading.

misc past lpar postings
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#62 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#61 Estimate JCL overhead
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#33 D
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#0 Home mainframes
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002o.html#18 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#4 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002p.html#55 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002q.html#26 LISTSERV Discussion List For USS Questions?
https://www.garlic.com/~lynn/2003c.html#41 How much overhead is "running another MVS LPAR" ?
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#12 Why are there few viruses for UNIX/Linux systems?
https://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#13 CPUs with microcode ?
https://www.garlic.com/~lynn/2003n.html#29 Architect Mainframe system - books/guidenance
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2004b.html#58 Oldest running code
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#28 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#47 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004j.html#45 A quote from Crypto-Gram
https://www.garlic.com/~lynn/2004k.html#37 Wars against bad things
https://www.garlic.com/~lynn/2004k.html#43 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004m.html#41 EAL5
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#13 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#32 What system Release do you use... OS390? z/os? I'm a Vendor S
https://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2005b.html#5 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005b.html#26 CAS and LL/SC
https://www.garlic.com/~lynn/2005c.html#56 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005d.html#70 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005h.html#19 Blowing My Own Horn
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#19 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#35 Determining processor status without IPIs

Book on computer architecture for beginners

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Subject: Re: Book on computer architecture for beginners
Date: Fri, 17 Jun 2005 16:02:17 -0700
John R. Levine wrote:
I like Blaauw and Brooks, "Computer Architecture", Addison-Wesley, 1997.

It's another 1200 page brick, but it's a nice complement to Hennessy and Patterson, more descriptive and historical. Half the book is a taxonomy of interesting historical computer designs, giving a unique view of how we got to where we are. The authors are old IBMers but the coverage is not unduly slanted toward IBM. Brooks is the same Brooks who wrote the software classic "The Mythical Man-Month."


melinda's virtual machine history at:
https://www.leeandmelindavarian.com/Melinda#VMHist

... from above
One of the first jobs for the staff of the new Center was to put together IBM's proposal to Project MAC. In the process, they brought in many of IBM's finest engineers to work with them to specify a machine that would meet Project MAC's requirements, including address translation. They were delighted to discover that one of the lead S/360 designers, Gerry Blaauw, had already done a preliminary design for address translation on System/360.(15) Address translation had not been incorporated into the basic System/360 design, however, because it was considered to add too much risk to what was already a very risky undertaking.

..... and
(15) G.A. Blaauw, Relocation Feature Functional Specification, June 12, 1964. "Nat Rochester (one of the designers of the 701) told us, 'Only one person in the company understands how to do address translation, and that's Gerry Blaauw. He has the design on a sheet of paper in his desk drawer.'" (R.J. Brennan, private communication, 1989.)

Performance and Capacity Planning

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Performance and Capacity Planning
Date: Sat, 18 Jun 2005 06:40:07 -0700
jmfbahciv@aol.com wrote:
Wouldn't this also limit your SMP to two CPUs? I can't imagine three or four interfering with each other; nothing would get run.

re:
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning

before doing the release 4 support for 158 & 168 (two-processor) smp, there were two other projects (that were never announced): VAMPS (a 5-way smp ... implemented with a lower-level 370 processor that didn't have caches ... and so there wasn't a cache problem) and logical machines (a 16-way smp using 158 engines that didn't implement standard cache consistency ... only compare&swap and a couple other operations).

the strong cache consistency of 370 ... gave problems when they tried to tie a pair of two-processor 3081s into a 4-way 3084. the 370 flavor had the cache running at the same speed as the machine cycle. later machines going to larger numbers of processors started running the cache at much faster cycle speeds than the rest of the infrastructure (so the cross-cache invalidation slow-down didn't slow down the processor cycle speed).

besides the influence of future system on 801/risc
https://www.garlic.com/~lynn/submain.html#futuresys

aka swinging the pendulum from the extreme hardware complexity of future system to the extreme hardware simplicity of 801/risc ... where there were even statements about trading off increased software complexity in 801/risc for simpler hardware.
https://www.garlic.com/~lynn/subtopic.html#801

the strong cache consistency problems in 370 influenced the "harvard" architecture with separate I & D caches w/o any provision for cache consistency (even between the I & D caches on the same chip). oak ... which was a 4-way (6000) processor complex with shared memory ... didn't provide for cache consistency. There were two modes ... a virtual "segment" was either defined as "cached" and not consistent, or a virtual "segment" could be tagged as consistent and never "cached".

The executive we reported to when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

went over to head up powerpc/somerset ... when that effort was started. the transition from rios/power to powerpc took some amount of rework to introduce the concept of cache consistency.

Performance and Capacity Planning

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Performance and Capacity Planning
Date: Sat, 18 Jun 2005 07:07:34 -0700
jmfbahciv@aol.com wrote:
Query: Did you begin to think of the CPU as a device and your equivalent to our CPNSER as a CPU driver? I always found that people who didn't think of CPNSER as a driver had enormous troubles with the concept of user thruput. I never had enough data to conclude this p.o.v. was key to misunderstandings about what a SMP is supposed to be.

re:
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning

VAMPS ... which was a 5-way smp ... on a lower-end 370 model w/o cache, with a lot put into m'code ... sort of got that way.
https://www.garlic.com/~lynn/submain.html#bounce

it was canceled w/o being announced and predated the work on turning out vanilla smp support in standard vm370 on standard 370 smp. it is also where i came up with the idea of a sort of global kernel lock ... w/o the spin-lock characteristics ... not having to rework the whole kernel for fine-grain locking ... just certain critical paths ... leaving the rest behind a global kernel lock.

however, in the VAMPS case ... much of the code that in the later vanilla 370 implementation was reworked for fine-grain smp locking ... was actually moved into the microcode of the hardware architecture. most of dispatching, initial interrupt handling and some number of additional privileged instruction simulations (in software in normal vm370) were moved into the microcode of the machine. in some sense, abstracting the smp dispatching into the hardware of the machine ... was akin to what i432 tried to do later.

in VAMPS, if a processor couldn't continue with the virtual machine ... it would attempt to interrupt into the kernel ... if the kernel was already busy on another processor ... it would queue an "enter kernel" interrupt and go off to look for another virtual machine to dispatch. standard vm370 at the time was nominally spending 50-60 percent of the time in virtual machine execution and 40-50 percent in the hypervisor kernel. for VAMPS to be successful ... the amount of time spent in the serialized hypervisor kernel code had to be reduced to less than 25 percent (so four processors worth of virtual machine execution didn't saturate 100 percent of a single serialized kernel processor).

besides abstracting a lot of the transition between hypervisor and virtual machine execution into the microcode of the machine (and parallelizing it) ... a lot of disk i/o process handling was abstracted and offloaded out into the microcode of the disk controller. Some of this was similar to what was later done in 370/xa for the queued i/o interface ... but for disk it could also reorder the queue and combine operations to optimize disk arm/head operation. this also helped to reduce the pathlength left in the serialized kernel.

when VAMPS was killed, the design was retargeted to a purely software implementation. the objective was to keep a basically serialized kernel design with a global kernel lock ... only the highest-use part of the kernel software was parallelized outside of the global kernel lock ... allowing for 1) the maximum amount of parallelization for the minimum amount of smp code changes and 2) sufficient parallelization so that entries into the kernel could queue a request when the kernel lock was held ... rather than having a global kernel spin lock implementation.

VAMPS was aggressive about abstracting away how many real processors actually existed ... the basic serialized kernel just placed tasks on one queue and took stuff off another queue. whatever processors there happened to be ... would pull work off the task queue (placed there by the hypervisor) and run until it absolutely needed serialized kernel service ... and then it would place work on the kernel queue (and potentially go off to see if there was other work to do).
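the queue-instead-of-spin idea above can be sketched as a toy in python ... a minimal sketch under stated assumptions: BounceLock, enter and the drain loop are illustrative names for the mechanism, not the actual VAMPS/vm370 implementation (which was microcode and 370 assembler):

```python
import threading
from collections import deque

class BounceLock:
    """global kernel lock w/o spin-lock characteristics: a processor that
    finds the lock held queues its kernel request and goes off to look
    for other (non-kernel) work; the current holder drains the queue."""

    def __init__(self):
        self._mutex = threading.Lock()   # protects _held and _queue only
        self._held = False
        self._queue = deque()

    def enter(self, kernel_work):
        """run kernel_work serialized under the lock; returns True if it
        ran (possibly after draining queued requests), False if it was
        queued for the current holder instead."""
        with self._mutex:
            if self._held:
                # "bounce": queue a super light-weight request, no spinning
                self._queue.append(kernel_work)
                return False
            self._held = True
        # serialized kernel section ... only one processor at a time here
        kernel_work()
        while True:                      # drain requests queued meanwhile
            with self._mutex:
                if not self._queue:
                    self._held = False
                    return True
                next_work = self._queue.popleft()
            next_work()
```

note the holder keeps draining queued kernel requests before releasing ... on cache machines that keeps the serialized kernel code hot in one processor's cache, which is part of why the queue/dequeue overhead paid for itself.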

Performance and Capacity Planning

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Performance and Capacity Planning
Date: Sat, 18 Jun 2005 08:44:39 -0700
re:
https://www.garlic.com/~lynn/2005k.html#45 performance and capacity planning
https://www.garlic.com/~lynn/2005k.html#46 performance and capacity planning

it was interesting period at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

concurrently and/or overlapped ... i was getting to do a bunch of ecps stuff
https://www.garlic.com/~lynn/94.html#21 370 ECPS, VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS, VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS, VM microcode assist
https://www.garlic.com/~lynn/submain.html#mcode

all the invention and design for VAMPS
https://www.garlic.com/~lynn/submain.html#bounce
https://www.garlic.com/~lynn/subtopic.html#smp

a bunch of hone production stuff,
https://www.garlic.com/~lynn/subtopic.html#hone

a bunch of virtual memory management (shared segments, paged mapped filesystem, etc)
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

a bunch of benchmarking and performance tuning stuff ... precursor stuff leading up to capacity planning
https://www.garlic.com/~lynn/submain.html#bench

and all the resource manager (including the related business stuff for charging for kernel software) stuff.
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

misc. recent postings
https://www.garlic.com/~lynn/2005b.html#8 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005b.html#58 History of performance counters
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#42 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#56 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#2 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005d.html#60 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#35 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#6 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#46 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005g.html#18 DOS/360: Forty years
https://www.garlic.com/~lynn/2005g.html#27 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#28 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#51 Security via hardware?
https://www.garlic.com/~lynn/2005g.html#55 Security via hardware?
https://www.garlic.com/~lynn/2005h.html#1 Single System Image questions
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005h.html#38 Systems Programming for 8 Year-olds
https://www.garlic.com/~lynn/2005.html#2 Athlon cache question
https://www.garlic.com/~lynn/2005.html#13 Amusing acronym
https://www.garlic.com/~lynn/2005.html#41 something like a CTC on a PC
https://www.garlic.com/~lynn/2005.html#43 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005.html#54 creat
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?
https://www.garlic.com/~lynn/2005i.html#39 Behavior in undefined areas?
https://www.garlic.com/~lynn/2005i.html#46 Friday question: How far back is PLO instruction supported?
https://www.garlic.com/~lynn/2005i.html#49 Where should the type information be?
https://www.garlic.com/~lynn/2005j.html#12 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#29 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#50 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#0 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#4 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#5 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#6 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#10 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#11 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#12 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#13 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP
https://www.garlic.com/~lynn/2005k.html#20 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#21 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#27 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#28 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#36 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005k.html#38 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005k.html#40 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005k.html#42 wheeler scheduler and hpo

The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)

From: lynn@garlic.com
Newsgroups: alt.folklore.urban,alt.folklore.computers
Subject: Re: The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)
Date: Sun, 19 Jun 2005 05:10:14 -0700
Stephen Sprunk wrote:
IP overhead is 2.6% with 1500 byte packets; that's negligible and certainly worth the ability to do other things with your pipe than just stream video.

whatever happened to all the fiber infrastructure battles about every atm cell having 10 percent overhead (which isn't even counting the signal encoding overhead of the next lower level)?

Determining processor status without IPIs

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Determining processor status without IPIs
Date: Sun, 19 Jun 2005 05:51:07 -0700
glen herrmannsfeldt wrote:
As I understand it, IBM's VM and OS/VS1 have a way for the guest OS to regain control while one task is paging to dispatch another task. I don't know if others supply this ability.

page fault handshaking ... where vm would try to reflect to the vs1 operating system a page fault that vm was handling for the vs1 virtual machine ... under the assumption that the fault might be for a specific task (in the vs1 multi-tasking environment) ... allowing vs1 a chance to switch tasks.

it had been done along with ecps for virgil/tully ... an endicott effort for the 138/148 ... attempting to make the environment nearly "vm only" ... i.e. customers would never run the machines in a non-vm environment ... something akin to the current proliferation of LPAR production environments in customer shops. misc. past posts about microcode related enhancements
https://www.garlic.com/~lynn/submain.html#mcode

the idea had originally been done earlier for cp67 and mvt

from melinda's history of vm
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

from above:
The process of making guest systems perform better began as soon as the customers got their hands on CP. Lynn Wheeler had done a lot of work on this while he was a student at Washington State, but he was by no means the only one who had worked on it. The CP-67 Project had frequently scheduled sessions in which customers reported on modifications to CP and guest systems to make the guests run better under CP. These customers had measured and monitored their systems to find high overhead areas and had then experimented with ways of reducing the overhead.(108) Dozens of people contributed to this effort, but I have time to mention only a few.

Dewayne Hendricks(109) reported at SHARE XLII, in March, 1974, that he had successfully implemented MVT-CP handshaking for page faulting, so that when MVT running under VM took a page fault, CP would allow MVT to dispatch another task while CP brought in the page. At the following SHARE, Dewayne did a presentation on further modifications, including support for SIOF and a memory-mapped job queue. With these changes, his system would allow multi-tasking guests actually to multi-task when running in a virtual machine. Significantly, his modifications were available on the Waterloo Tape.


Performance and Capacity Planning

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Performance and Capacity Planning
Date: Sun, 19 Jun 2005 06:59:25 -0700
jmfbahciv@aol.com wrote:
Would you give an example or two of your critical paths? It just occurred to me that these might be different because of OS philosophy differences. I'd always assumed that these would be the same, no matter what was running on the hardware.

re:
https://www.garlic.com/~lynn/submain.html#bounce

when i was an undergraduate ... i did a lot of path rewrites of stuff that i thot would likely be high-use ... as well as doing "fastpath" ... a special case path thru the code for the most common case. some of that reduced pathlength by a factor of 100 times ... and for some general benchmarks ... overall reduction of 80-90 percent. old reference to a presentation i gave at the Atlantic City share, aug. of 1968:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

for ecps ... that i was doing concurrently with VAMPS ... there was extensive instrumentation for deciding what parts of the kernel to drop into microcode of the (uniprocessor) virgil/tully machines. here is a summary of one of the studies
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/submain.html#mcode

basically we were told that there would be 6000 bytes of microcode for ecps (vm microcode performance assist) .... that kernel type cp code would drop approximately byte-for-byte from 370 code into machine microcode ... and that machine microcode would run about ten times faster than 370 code (i.e. the native machine microcode engine had about a 10:1 instruction ratio emulating 370 ... so code dropped directly into microcode picked up about a 10:1 speedup).

in the VAMPS scenario, it was slightly more structured than in the ecps scenario ... where everything was a candidate. in VAMPS there were some logical construction limitations ... i.e. where in the processing path the code actually was.

the top two pathlengths in the ecps study (referenced above), i had extensively optimized repeatedly over the years ... and they still accounted for 15 percent or so of kernel time ... and they were also in areas that could be offloaded for VAMPS.

the top item is selecting a virtual machine to run ... loading up all the information for running that virtual machine ... and then dispatching the virtual machine. with the queued interface for VAMPS ... the kernel just put tasks on queue for dispatching. the processor microcode selected something on the dispatch queue, locked the queue entry, loaded up the information to run ... and ran it.

the 2nd entry (in the ecps study) was entry to the kernel, typically because of 1) page fault or 2) privilege instruction interrupt ... requiring the kernel to simulate on behalf of the virtual machine. in both VAMPS and ecps ... some amount of the microcode involved in execution of privilege instructions ... was enhanced to recognize virtual machine mode ... and as a result would directly execute a "privilege" instruction using virtual machine rules ... bypassing having to enter the hypervisor kernel for simulation of the privilege instruction. for other things that actually required entry into the hypervisor kernel, the VAMPS microcode would directly handle a lot of the status storing away, attempting to obtain the kernel lock ... and actually entering the kernel. if the VAMPS microcode was blocked from entering the kernel ... it would queue a super light-weight task against a kernel queue ... and go off to the (microcode) dispatcher to run another task.

from the kernel standpoint ... it saw putting things on a queue ... and re-entry as part of pulling something off the queue. it never actually saw low-level interrupts ... typical of real 360/370 ... and/or spinning for a kernel lock; a processor either entered into kernel mode (because no other processor was currently in kernel mode) ... or it queued a request for kernel mode and went off to see if there was other (non-kernel) work to do.

Performance and Capacity Planning

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: 19 Jun 2005 09:38:58 -0700
Subject: Re: Performance and Capacity Planning
jmfbahciv@aol.com wrote:
IIRC, the only reason JMF invented his spin lock is because KL caches were not write-thru and the other CPU had to wait for the data to get to memory. That's why our SMP has a cache sweep serial number.

I may very well be confused here because all of this is based on my memory of conversations.


360 had the "test&set" instruction for locking conventions ... basically test a byte for zero ... set it to one if it was zero ... and indicate the result in the condition code ... it was defined as a serialized atomic operation ... across multiprocessors and caches.

at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

charlie was doing a lot of work on cp67 multiprocessing support and fine grain locking and came up with a new instruction ... it was given the name compare and swap ... because CAS are charlie's initials
https://www.garlic.com/~lynn/subtopic.html#smp

attempting to justify it for 370 ... the 370 architecture owners in pok ... said that it wouldn't be possible to justify a new instruction for 370 based solely on multiprocessor use (the view around pok was that test&set was sufficient for multiprocessor operation). as a result, the use of atomic operations in a multithreaded environment (either multiprocessor or single processor) was invented. several examples were worked out where multi-threaded applications could perform various kinds of operations w/o having to resort to kernel calls to serialize (even in a single processor environment).

compare&swap was defined as being atomic and serializing ... across any number of processors and any kind of cache structure.
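the compare&swap retry pattern can be modeled as a toy in python ... a sketch, not the real instruction: Word, compare_and_swap and add are illustrative names, and the internal lock merely simulates the hardware atomicity that the 370 CS instruction provides (CS compares a register against a storage word, conditionally stores, and sets the condition code):

```python
import threading

class Word:
    """a storage word with a compare&swap primitive ... toy stand-in for
    the 370 CS instruction; the lock simulates hardware atomicity."""

    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, old, new):
        """atomically: if the word equals old, store new and succeed;
        otherwise fail and report the current value (akin to CS setting
        the condition code and returning the fetched word)."""
        with self._lock:
            if self.value == old:
                self.value = new
                return True, old
            return False, self.value

def add(word, n):
    """multi-threaded counter update w/o any kernel call to serialize:
    fetch, compute, compare&swap ... retry if another thread interfered."""
    cur = word.value
    while True:
        ok, seen = word.compare_and_swap(cur, cur + n)
        if ok:
            return cur + n
        cur = seen          # interference ... retry with the fresh value
```

this is the style of multi-threaded update (no kernel serialization call) that the examples worked out for the pok architecture owners were meant to show.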

Performance and Capacity Planning

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Performance and Capacity Planning
Date: Sun, 19 Jun 2005 10:10:15 -0700
jmfbahciv@aol.com wrote:
Sheesh. They must have been spending all their knocking each other up. My gut feel says that 25% is too much.

the issue in cp67 was that all time was accounted for .... while in virtual machine mode ... it was all charged to "problem state" of the virtual machine. there were lots of reasons to enter the kernel and "supervisor state" ... lots of things could be performed in the kernel on behalf of the virtual machine in "supervisor state" ... which would also be charged against the virtual machine .... doing things on behalf of the virtual machine.

when i originally started on cp67 there was some non-linear code (linear scanning of certain kinds of lists) that grew proportional to the number of tasks and virtual machines. at 35 virtual machines it was hitting 15-20 percent of total cpu (all kernel supervisor) and not charged to a specific virtual machine ... aka two kinds of kernel supervisor state ... that having to do with general system bookkeeping not charged to a specific user (and had to be amortized across all users as "overhead") and kernel supervisor state that was charged directly to the virtual machine associated with kernel activity done directly on behalf of the virtual machine.

when i restructured various paths in the system ... i did away with nearly every linear scan for overhead ... reducing cp67 "overhead" to possibly half a percent of elapsed time ... even with 75-80 users.

the issue with the global kernel spinlock ... the standard state of the art for the period ... was that on entry to the kernel (for whatever reason) the kernel interrupt code would spin on the global kernel spinlock ... until that processor obtained the spinlock and could proceed. only one processor could be executing in the kernel at any point in time.

the logic redo for VAMPS ... and then later ported to a purely software implementation ... moved the kernel lock well past the basic interrupt routines ... so that a much smaller portion of the kernel was serialized, being only able to execute on a single processor at a time. the other VAMPS change ... later morphed into the purely software implementation ... was that the global kernel serialization lock wasn't a spinlock ... i initially referred to it as a bounce lock
https://www.garlic.com/~lynn/submain.html#bounce

a processor when it needed certain kinds of kernel function would attempt to obtain the kernel serialization lock ... if it obtained it ... it would proceed as normal. if it failed to obtain the kernel lock ... it would queue a super light-weight thread against the kernel lock and go off and look for something else to do.

so access to certain serialized kernel functions could only be performed on one processor at a time (although on a moment to moment basis ... it could be any processor in the complex acting as the kernel server). since all requests for those kernel services were serialized on a single processor at a time ... that means that the total service time available for performing those services is 100 percent of a single processor (couldn't have more than 100 cpu seconds aggregate of serialized kernel time per 100 seconds of real time)

so you have a somewhat standard operations research analysis. say you have a four processor system .... where for every 100 seconds of general execution ... you needed 25 seconds of globally serialized service. with four processors ... there would be 400 seconds of execution in 100 seconds of real time, generating 4*25=100 seconds worth of serialized workload in 100 seconds of real time. with a five processor system (effectively four processors worth of general execution plus one processor worth of serialized kernel) ... that would result in no processor having to wait on serialized kernel processing. the VAMPS microcode changes actually reduced it to less than 25 seconds of serialized kernel processing per 100 seconds of non-kernel processing .... but the requirement was that it had to be reduced to at most 25 seconds of kernel processing to keep the infrastructure from waiting on kernel services.
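the saturation arithmetic above works out as a minimal sketch (the function name is illustrative):

```python
def serialized_kernel_demand(n_processors, kernel_secs_per_100s):
    """seconds of serialized kernel work generated per 100 seconds of
    real time by n processors each doing 100 seconds of general
    (virtual machine) execution."""
    return n_processors * kernel_secs_per_100s

# four processors at 25 seconds apiece exactly saturate the 100 seconds
# that a single serialized kernel processor can deliver
assert serialized_kernel_demand(4, 25) == 100

# the VAMPS microcode work drove the figure below 25 ... leaving headroom
assert serialized_kernel_demand(4, 20) < 100
```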

now in the standard global kernel spin-lock implementations of the period ... you didn't actually see processors in wait state for kernel services ... they were all spinning on the kernel spin-locks ... if there was greater demand for serialized kernel services than 100 percent of a single processor.

in the VAMPS ... and later software implementation case with the bounce/queued implementation ... a processor that couldn't obtain the global kernel spin-lock, instead of spinning would queue a super light-weight request (for kernel services) ... and go off and look for other, non-kernel work to do. If it couldn't find other non-kernel work to do, it would enter wait-state ... but it wouldn't be in a tight compute-bound spin-loop.

the super light-weight queueing mechanism was such that on cache machines ... any overhead of actually doing the queue/dequeue operations was more than offset by maintaining kernel instruction cache locality on the same processor (it actually ran faster).

Performance and Capacity Planning

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Performance and Capacity Planning
Date: Sun, 19 Jun 2005 10:43:54 -0700
jmfbahciv@aol.com wrote:
Oh, then this microcode wasn't in each CPU....Interrupts had to global...or are you talking about CPU-specific interrupts like channels momentarily "owned" by a CPU?

all VAMPS processors ran identical microcode and instructions.
https://www.garlic.com/~lynn/submain.html#bounce

in VAMPS, the global kernel lock metaphor ... just precluded more than one processor at a time executing in the kernel. in the morph to the pure software implementation ... some amount of the kernel was parallelized, maybe 1500-2000 instructions worth, the rest was left behind a serialized kernel lock. however, the 1500-2000 instructions that were parallelized were the highest use instructions and also implemented the queue/dequeue operations ... allowing processors that were blocked from entering the kernel to go off and attempt to do non-kernel work (rather than spinning on a lock).

standard 360/65 smp just had shared memory ... but not shared i/o. the 360/65 simulated shared i/o with device controllers that were attached to multiple channels (aka the 360 i/o buses); the 360/65 hardware was frequently configured so that the same channel address on both processors connected to the same control unit at the same address.

the 360/67 smp was different ... it had a channel controller box that allowed all channels to be connected-to & addressed-by all processors.

370 reverted to the 360/65 smp model ... requiring shared i/o simulation by having shared controller connection to processor unique channels. you didn't see the return to common channel addressing until 370/xa on the 3081 (the 360/67 also had 32-bit virtual addressing mode, 370 only had 24-bit virtual addressing mode ... it also wasn't until you got to 370/xa on the 3081 that 31-bit virtual addressing was introduced).

370/158 and 370/168 offered a cost reduced, computational intensive two-processor option ... where only one processor had connected channels for i/o. in that scenario ... for any application running on the processor w/o i/o connectivity that requested kernel i/o services ... the request had to be handed off to the processor with i/o connectivity. this configuration also tended to have a somewhat unanticipated side-effect: the cache-hit ratio on the processor w/o i/o connectivity tended to go up ... and it therefore got more work done. the cache hit ratio on the processor with i/o connectivity could also slightly improve ... because in the two processor case ... the kernel i/o code would only be executing on one processor ... increasing the probability of a local cache hit for that part of the kernel.

Determining processor status without IPIs

Refed: **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Determining processor status without IPIs
Date: Sun, 19 Jun 2005 13:22:10 -0700
Eric P. wrote:
So it seems that in the True VM model the host must never punt page fault exceptions to the guest. This would limit the ability to test guest OS's. The host does punt timer interrupts to the guest, which the guest uses to trigger scheduling of applications running on it.

so in the cp67/mvt and the vm370/vs1 scenarios, the hypervisor had some additional information about the virtual guest. the program status word (PSW) had virtual supervisor/problem state and some other indicators. when the virtual PSW was in (virtual) supervisor state ... you were pretty sure that the guest operating system was in its kernel ... and so reflecting a fault wouldn't do much good. it was only when the virtual PSW was in (virtual) problem state that you were reasonably sure the guest operating system was running in application space. there were also hints about whether or not the virtual PSW was enabled for (virtual) i/o interrupts, etc.

in the cp67/mvt case ... the mvt guest believed it was running "real" w/o virtual address space support. the mvt guest got reflected a pseudo page fault interrupt from the hypervisor ... which it treated like a signal for mvt to suspend the current application and see if it could task switch. this whole processing increased the overhead of providing virtual machine simulation ... but it allowed a guest operating system to get higher thruput than it would have if it was blocked from execution while the hypervisor handled a page fault on behalf of the virtual machine. note that virtual page fault handshaking was an optional feature that could be turned on/off for specific virtual machines.
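the reflection decision described above might be sketched as follows ... a hypothetical helper, not the actual cp67/vm370 logic, which lived in the hypervisor's fault handling path:

```python
def should_reflect_pseudo_page_fault(handshaking_enabled, vpsw_problem_state):
    """decide whether the hypervisor should reflect a pseudo page fault
    to the guest ... only worthwhile when the guest's virtual PSW is in
    (virtual) problem state, i.e. running application code, so the guest
    kernel gets a chance to task switch while the real fault is serviced.
    handshaking_enabled models the per-virtual-machine on/off option."""
    return handshaking_enabled and vpsw_problem_state
```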

it was a little more complicated in the vm370/vs1 case ... the vs1 guest could be running a virtual address space ... i.e.

1st level ... 370 real address space
2nd level ... vm370 providing virtual address space for what the virtual machine thot was the real address space
3rd level ... virtual machine providing virtual address space

there would only be an issue with regard to vm370 having a page fault for 2nd level virtual addresses ... which it could reflect to the virtual machine thru page fault handshaking.

the issue of running a page replacement algorithm under a page replacement algorithm is a configuration and load issue ... and can result in some pathological performance issues that many might believe to be inexplicable (w/o understanding some of the underlying assumptions behind page replacement algorithms).

this was significantly mitigated in the vm370/vs1 scenario. vs1 was a minimal translation of an earlier batch system to a virtual address space environment. basically vs1 created a single virtual address space ... and then for the most part pretended it was running on a real machine with real storage the size of the virtual address space (typical configuration was a 4mbyte virtual address space running on a real 370 with 512k bytes). in the page handshaking scenario ... vm370 might provide a 4mbyte virtual machine address space ... and vs1 would define a single virtual address space where the size exactly matched the virtual machine storage size. in this scenario, while vs1 was running with a single virtual address space ... it wasn't doing any paging (or page replacement) ... since the single virtual address space size and the virtual machine address size were the same.

this had several advantages ... besides avoiding the page replacement under page replacement scenario. vs1 used the 2k page option that was originally selected for the smaller real storage sizes of the 370s originally introduced ... i.e. it compacted better in really small real storage. vm370 used 4k page sizes ... which didn't compact as well in really small real storages ... but could cut the number of page faults in half (since twice as much was being transferred at a time) ... and compaction was less of an issue as larger and larger real storage sizes became available on 370.

a "large" 370/145 had 512k storage. vm370/vs1 handshaking was introduced at the same time as ecps support for virgil/tully (138/148 follow-on to 135/145). a typical 370/148 had 1mbyte of real storage (twice that of a large 370/145).
https://www.garlic.com/~lynn/submain.html#mcode

the other issue was that i had exceedingly optimized the end-to-end pathlength for turning a page (the pathlength efficiency and accuracy of page replacement), as well as the whole i/o pathlength, task switch overhead, etc.
https://www.garlic.com/~lynn/subtopic.html#wsclock

so in addition to possibly cutting the number of page faults in half when vs1 let vm370 do its paging (using 4k pages instead of vs1 native 2k page sizes) ... my total pathlength for doing a page fault (whether 2k or 4k size) was possibly 1/5th to 1/10th the total pathlength that it took a vs1 kernel to handle a page fault (whether 2k or 4k size). this 1/5th to 1/10th value is for straight 370 instruction comparison ... and doesn't include the additional kernel ecps microcode performance assist done for vm370 on virgil/tully machines. if doing 1/2 the page faults, each one at 1/10th the pathlength ... that yields about 1/20th the overall pathlength. specifically on virgil/tully, the ecps microcode assist might represent an additional improvement of 2-4 times ... say 1/40th to 1/80th.

Encryption Everywhere? (Was: Re: Ho boy! Another big one!)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main
Subject: Re: Encryption Everywhere? (Was: Re: Ho boy! Another big one!)
Date: Mon, 20 Jun 2005 05:49:32 -0700
R.S. wrote:
Obviously no. Everybody talks about it but almost nobody do it. I mean i.e. tape encryption. Regardless of tapes, SSL's, encrypted networks, VPN's somewhere at the end the data appears in unencrypted form. Otherwise it is useless. And this end could be the weakest link! A human seats there and reads the data, maybe copies it, maybe the copy is illegal...

recent thread in sci.crypt & comp.arch on whether all aspects of security have to be applied in equal strengths. however, this discussion has an example where having strong authentication eliminates the possibility of using information obtained thru eavesdropping for fraudulent purposes. if you can't use such harvested information for fraudulent purposes ... then it significantly mitigates the requirement for encrypting the information ....

one way of classifying various components of security is PAIN
P ... privacy (or sometimes CAIN, with C for confidentiality) ... i.e. encryption
A ... authentication
I ... integrity
N ... non-repudiation


in the referenced example ... sufficiently strong authentication and business rules to eliminate the usefulness of harvesting information for fraudulent purposes can significantly mitigate the requirement for having to hide the information (thru encryption).

https://www.garlic.com/~lynn/2005k.html#26
https://www.garlic.com/~lynn/2005k.html#23

lots of past posts related to account number harvesting
https://www.garlic.com/~lynn/subintegrity.html#harvest

... for some drift ... with respect to the N in pain ... the rsa conference earlier this year had a track on logging and journalling ... also helpful in catching insiders who might be doing bad things.

for additional drift ... there have been some number of threads w/discussions about end-points almost always being more profitable targets (for the crooks) than the wires between the end-points.

Encryption Everywhere? (Was: Re: Ho boy! Another big one!)

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main
Subject: Re: Encryption Everywhere? (Was: Re: Ho boy! Another big one!)
Date: Mon, 20 Jun 2005 07:38:14 -0700
lynn@garlic.com wrote:
for additional drift ... there have been some number of threads w/discussions about end-points almost always being more profitable targets (for the crooks) than the wires between the end-points.

re:
https://www.garlic.com/~lynn/2005k.html#55 Encryption Everywhere? (Was: Re: Ho boy! Another big one!)
another past post regarding blanketing the earth under miles of encryption and still having leaks
https://www.garlic.com/~lynn/2004b.html#25

also general fraud postings
https://www.garlic.com/~lynn/subintegrity.html#fraud

misc. past posts about end-points & fraud:
https://www.garlic.com/~lynn/aadsmore.htm#debitfraud Debit card fraud in Canada
https://www.garlic.com/~lynn/aepay3.htm#votec (my) long winded observations regarding X9.59 & XML, encryption and certificates
https://www.garlic.com/~lynn/aadsm11.htm#17 Alternative to Microsoft Passport: Sunshine vs Hai
https://www.garlic.com/~lynn/aepay11.htm#73 Account Numbers. Was: Confusing Authentication and Identiification? (addenda)
https://www.garlic.com/~lynn/aadsm11.htm#21 IBM alternative to PKI?
https://www.garlic.com/~lynn/aadsm11.htm#22 IBM alternative to PKI?
https://www.garlic.com/~lynn/aadsm12.htm#51 Frist Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm14.htm#4 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm15.htm#38 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm16.htm#13 The PAIN mnemonic
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm19.htm#17 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm19.htm#19 "SSL stops credit card sniffing" is a correlation/causality myth
https://www.garlic.com/~lynn/aadsm19.htm#26 Trojan horse attack involving many major Israeli companies, executives
https://www.garlic.com/~lynn/aadsm19.htm#27 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2002f.html#27 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#35 Security and e-commerce
https://www.garlic.com/~lynn/2002p.html#50 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2003m.html#51 public key vs passwd authentication?
https://www.garlic.com/~lynn/2005i.html#1 Brit banks introduce delays on interbank xfers due to phishing boom

Secure Banking

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure Banking
Newsgroups: comp.security.misc
Date: 21 Jun 2005 14:39:00 -0600
"wayne.taylor2@gmail.com" writes:
I am a final year computing student, doing a project on secure banking, as part of my project there is criteria for the simulation of transactions, as well as designing an industry data model or likeness that banks use.

there is the ISO 8583 financial industry standard for payment network transactions ... check the ISO international standards web site
http://www.iso.org
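as a rough illustrative sketch (not the full standard): an ISO 8583 message starts with a 4-digit message type indicator (MTI) followed by a 64-bit bitmap indicating which data elements are present. the ASCII/hex encoding and single-bitmap assumption below are simplifications for the example:

```python
def parse_iso8583_header(msg: str):
    """parse the MTI and primary bitmap of a simplified, ASCII/hex-encoded
    ISO 8583 message. real messages may use binary or EBCDIC encodings,
    and bit 1 of the primary bitmap signals a secondary bitmap for
    fields 65-128 (elided here)."""
    mti = msg[:4]                    # e.g. 0200 = financial request
    bitmap = int(msg[4:20], 16)      # 16 hex chars = 64 bits
    # bit 1 is the most significant bit; bit i set => field i present
    fields = [i for i in range(1, 65) if (bitmap >> (64 - i)) & 1]
    return mti, fields

# bitmap 0x7220... has bits 2, 3, 4, 7, 11 set -- typical fields such as
# PAN, processing code, amount, transmission date/time, trace number
mti, fields = parse_iso8583_header("0200" + "7220000000000000")
```

a real implementation would go on to pull each present data element out of the message body according to its fixed or variable length definition in the standard.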

working in the ansi x9a10 financial standard working group, we were charged with preserving the integrity of the financial infrastructure for all retail payments ... and came up with x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959

some recent comments in a crypto mailing list
https://www.garlic.com/~lynn/aadsm19.htm#38
https://www.garlic.com/~lynn/aadsm19.htm#39
https://www.garlic.com/~lynn/aadsm19.htm#40

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Book on computer architecture for beginners

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Book on computer architecture for beginners
Newsgroups: comp.arch,alt.folklore.computers
Date: 21 Jun 2005 18:47:07 -0600
glen herrmannsfeldt writes:
IBM would sell it to you until a few years ago.

Now the oldest you can get is the 370 Principles of Operations. Maybe about three times as thick, and very reasonably priced.


you can sort of tell when they switched to script. PoP has a lot of boxes for syntax and diagrams. Early versions were typeset.

then they moved the "red book" architecture manual (distributed in a 3-ring red binder) to cms script. Depending on the flag used when you invoked script ... it either printed the full architecture manual (lots of sections discussing justification for instructions, various trade-offs considered, engineering notes, etc) ... or just the PoP (principles of operation) subset. During this early period ... they were printed on 1403 ... and I guess pubs were generated using photo-offset from the 1403 output.

You could get fairly high quality 1403 output with a really good ribbon. However, the 1403 had small gaps when printing vertical lines (you could come pretty close to solid vertical lines if you reset the 1403 to 8 lines/in instead of 6 lines/in ... but then you got somewhat smashed characters and 88 lines/page instead of 66 lines/page). Solid vertical lines came back when they started using the 3800 laser printer (instead of 1403).

cms script was done at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

and then "G", "M", and "L" invented GML in '69 ... and GML format processing was added to cms script ("generalized markup language" was contrived to match their initials). later GML was standardized in ISO as SGML ... which has since begat html, xml, fsml, saml, bpml, et al.
https://www.garlic.com/~lynn/submain.html#sgml
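for flavor, a small illustrative fragment of GML starter-set style markup of the kind cms script came to process ... the tags here are recalled from the common starter set rather than taken from any particular manual, and you can see the lineage to later html:

```
:h1.Page Replacement
:p.This section describes the clock algorithm.
:ul.
:li.scan the reference bits
:li.reset and advance the hand
:eul.
```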

some mention of ibm 3800, cms script ... and the cms script clone from univ. of waterloo ... and the creation of web and html
http://ref.web.cern.ch/ref/CERN/CNL/2001/001/tp_history/Pr/

a picture of 3800
http://ukcc.uky.edu/~ukccinfo/ibm3800.html

a little history of laser printer and mention the first 3800 installation
http://inventors.about.com/library/inventors/blcomputer_printers.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Book on computer architecture for beginners

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Book on computer architecture for beginners
Newsgroups: comp.arch,alt.folklore.computers
Date: 22 Jun 2005 07:58:21 -0600
Del Cecchi writes:
Now Lynn, you could get perfect lines on a 1403. We printed ALDs and WPRINTS on 1403 all the time. I do believe a special chain was used.

del cecchi

PS an ALD was an "automated Logic Diagram" or schematic printed from the netlist (BDL/S).

A Wprint was a print of the wiring on a chip or card printed as a page or group of pages per level.


I have some vague recollection of a special train that printed sideways ... so the top of the page was at the side ... and multiple pages taped together ... not across the perforations ... but across the sides (after the tractor holes were removed). Also, the 1403 would be switched from the normal 6 lines per inch to 8 lines per inch.

you can pretty clearly see the 1403 printing in this vm370 manual
http://www.bitsavers.org/pdf/ibm/370/vm370/GC20-1800-6_VM370intr_Oct76.pdf

the first couple of pages, prefaces, etc. have been typeset ... then the contents (on the 6th page) changes to the 1403 font ... and the first box diagram on page 9 of the introduction (11th page) has broken vertical lines (as an aside ... the two 370 PoPs at the above site are typeset, not 1403).

note also ... that the original script formatting controls were runoff-like "dot commands" (before GML support was added). frequently used were

.rc on
.rc off

and/or

.rc 1 on
.rc 1 off

which indicated revisions. revision codes are indicated by the side-bar next to the text. you can see this (also) on page 9 of the introduction (11th page) at the top left ... in the paragraph that starts "VM/370 is designed for ...". again the vertical bars are broken.
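a minimal sketch of how such a revised paragraph might have looked in script source ... the exact control-word syntax for defining the revision-code character varied by release, so treat this as illustrative only:

```
.rc 1 |
.rc 1 on
VM/370 is designed for ...
.rc 1 off
```

the formatter then printed the change-bar character in the margin beside every output line produced while the revision code was on.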

The revision is for virgil/tully ... 370s 138/148. The following page (12th page) shows a diagram of a 138 configuration (again note the revision bars on the side).

On the following page (13th page), there is an inserted drawing that has continuous lines ... and also a different font. But if you move on to the next page (14th page, pg. 12 of the introduction), you have more diagrams with broken vertical lines and also broken-line revision bars mentioning VMCF.

misc. recent posts mentioning ecps &/or virgil/tully
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005g.html#16 DOS/360: Forty years
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005g.html#18 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#38 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005k.html#42 wheeler scheduler and hpo
https://www.garlic.com/~lynn/2005k.html#47 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#49 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005k.html#50 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#54 Determining processor status without IPIs

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

From: lynn@garlic.com
Newsgroups: netscape.public.mozilla.crypto
Subject: Re: The Worth of Verisign's Brand
Date: Wed, 22 Jun 2005 09:43:35 -0700
Anders Rundgren wrote:
If you have a scheme that with limited amount of money and user inconvenince allows a citizen to access potentially thousands of e-gov sites, without using TTPs I (and all e-govs in the World), would like to hear about it.

Replacing the _indeed_ stale cert info with a stale signed account claim would not have any major impact this scenario except for a few saved CPU cycles.

SSL is by no means perfect but frankly; Nobody have come up with a scalable solution that can replace it. To use no-name certs is not so great as it gives user hassles


The issue isn't about TTPs specifically ... it is about PKIs and certificates, which have a design point of addressing specific issues involving offline environments; aka the offline email environment of the early 80s, where somebody dialed up their local (electronic) post office, exchanged email and then hung up. They then possibly had some first-time email from somebody that they had never communicated with before ... and needed some method of finding out some information about the sender. This is somewhat analogous to the "letters of credit" from the sailing ship days.

The issue isn't about TTPs ... it is about trying to apply a solution designed to compensate for being in an offline environment (with no recourse to the real information, including direct access to the TTPs holding the real information) to the emerging online environment.

The contention is that in a real online environment, resorting to stale, static information (designed to compensate for the lack of access to real online information in an offline environment) is fundamentally flawed, given timely access to the current real information.

Somewhat as the online environment has become more and more pervasive ... the stale, static, offline PKI paradigm has attempted to find market niches in the low/no value areas where the relying party can't justify the possible incremental cost of having access to real, timely information. For instance, a PKI certificate issued at some point in the past year might claim that an individual has a specific bank account. All other things being equal, would a relying party (about to execute a high value transaction) prefer to have

1) stale, static information possibly a year old regarding the other party having a specific bank account

or

2) timely, real-time response from the other party's financial institution that the other party not only still has an active account ... but also that the account has sufficient funds to cover the indicated transaction, and furthermore that the other party's financial institution will stand behind the transfer of those funds.

One of the issues for PKI infrastructures moving into the low/no value market segments is that they are less & less able to charge any significant amounts for stale, static certificates in support of no/low value operations.

The other issue is that the typical TTP PKI business model is contrary to most standard business practices. In most standard business practices, the relying party contracts directly with a TTP for timely information ... creating some legal obligation on the part of the TTP to perform in a specific manner. In the TTP PKI certificate-push model, the key owner contracts with the TTP agency (buying a certificate), which creates various kinds of legal obligation between the TTP agency and the key owner. The key owner then pushes that certificate to a relying party .... where there is no legally established relationship between the relying party and the TTP agency.

another issue is that in the early 90s, you somewhat found TTPs considering grossly overloading X.509 identity certificates with enormous amounts of personal information. the problem was that many of the TTPs had no idea what purposes the X.509 identity certificates might be put to ... and/or with which relying parties (and what information requirements unknown and unpredictable relying parties might have).

then as you moved into the mid-90s, some institutions were starting to come to the realization that x.509 identity certificates grossly overloaded with personal information represented significant privacy and liability issues. the result was somewhat a retrenching to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

in this model, a relying party registers a key-owner's public key in some sort of database ... and then issues a stale, static, relying-party-only certificate effectively containing only some sort of account number (or other form of database lookup value) bound to the public key. The key-owner was then to append such a certificate to all communication with the relying party. However, it is trivial to demonstrate that appending such stale, static, relying-party-only certificates to communication with the relying party is redundant and superfluous ... i.e. the relying party is already in possession of a superset of the information in the stale, static, relying-party-only certificate.

specifically with respect to SSL certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

the issue is that there were concerns about the integrity of the domain name "TTP" providing real-time responses to real-time domain name requests thruout the world. the browsers would validate the certificate and then check that the domain name the user typed in matched the domain name in the certificate.
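the browser-side name check can be sketched as follows ... simplified: real browsers also handle wildcard entries, subjectAltName lists, and internationalized names, all elided here:

```python
def hostname_matches(typed: str, cert_names: list[str]) -> bool:
    """after the certificate chain itself validates, the browser compares
    the host the user typed against the names bound into the certificate.
    exact match only -- wildcard handling is deliberately omitted."""
    typed = typed.lower().rstrip(".")
    return typed in {name.lower().rstrip(".") for name in cert_names}
```

if the match fails, the user gets a warning ... but note the whole check rests on the CA having correctly bound the right name to the right key in the first place, which is exactly the validation problem discussed below.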

the problem for the CA SSL PKIs is that they had to validate the information for a requested ssl domain name certificate with the actual authoritative agency (aka TTP) for domain names ... the domain name infrastructure. The CA SSL PKIs had to get various kinds of identification information from the SSL domain name certificate applicant and then perform the time-consuming, expensive and error-prone task of attempting to match it with the identification information on file with the domain name infrastructure as to the owner of the specific domain name.

This also left the CA SSL PKIs vulnerable to possible integrity problems with the domain name infrastructure. Somewhat from the CA SSL PKI industry, there is a proposal to improve the integrity of the domain name infrastructure by having domain name owners register public keys for their domains. Future communication is then digitally signed and can be verified with the on-file public key .... note, a certificate-less public key operation
https://www.garlic.com/~lynn/subpubkey.html#certless

The other advantage for the CA SSL PKIs is that they can also require that SSL domain name certificate requests be digitally signed. Then they can use the on-file (certificate-less) public key to change from the time-consuming, expensive and error-prone identification process to a much simpler, less expensive and more reliable authentication process (by retrieving the on-file public key to validate the digital signature on the SSL domain name certificate request).

This however represents something of a catch-22 for the CA SSL PKI industry. If they are able to retrieve trusted on-file public keys from the domain name infrastructure for validating digital signatures (as the root of the trust chain for SSL domain name certificates) ... then it would be technically possible for everybody in the world to also retrieve trusted on-file public keys in their real-time, online domain name resolution requests. If everybody in the world could do real-time retrieval of on-file public keys from the domain name TTP ... for verifying digital signatures in communication with the servers they are contacting ... then it obsoletes the requirement for having SSL domain name certificates.

