From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Cyber attackers empty business accounts in minutes
Date: 8 Aug, 2009
Blog: Financial Crime Risk, Fraud and Security

re:
other recent exploits using static &/or harvested data in replay attacks
Credit cards trapped in archaic security era
http://www.zdnetasia.com/blogs/btw/0,3800011236,63012755,00.htm
from above:
Equally disturbing is the unwillingness of local banks to assume
liability for fraud involving transactions made before the credit card
has been reported lost.
... snip ...
Three metro Atlantans indicted for bank fraud conspiracy
http://triangle.bizjournals.com/triangle/othercities/atlanta/stories/2009/08/03/daily104.html
from above:
Three metro Atlanta residents have been indicted on bank fraud
conspiracy and identity theft charges for allegedly stealing Wachovia
Bank account numbers and draining thousands of dollars from the
accounts.
... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Poll results: your favorite IBM tool was IBM-issued laptops.
Date: 8 Aug, 2009
Blog: Greater IBM

HONE was started after the 23jun69 unbundling announcement ... misc. past posts mentioning unbundling
with cp67 virtual machine system ... to give branch office SEs
"practice" using operating systems (in virtual machines) ... since the
"learning" experience as large groups at customers sites had pretty
much been eliminated with starting to charge for SE time. Misc. past
posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone
the science center had also ported apl\360 to cp67/cms for cms\apl
... reworking it for large virtual memory environments and adding
additional features ... like being able to invoke system services. One
of the early adopters was the business group in Armonk that loaded
the most sensitive corporate customer information on the cambridge
system ... to run business models written in apl ... misc. past posts
mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech
A whole lot of CMS\APL based tools also started being deployed on HONE for sales & marketing support ... until that came to dominate all HONE usage and the practice of running operating systems in virtual machines withered away.
Providing HONE with lots of system enhancements and support was one of my hobbies ... and it got me some of my earliest overseas business trips ... as HONE systems were being cloned around the world (one of my first was the move of EMEA hdqtrs from the states to La Defense).
Some of the other stuff is discussed in the "Timeline: The evolution
of online communities" thread ... also archived here:
https://www.garlic.com/~lynn/2009j.html#79 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009j.html#80 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#6 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#9 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#12 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#13 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#19 Timeline: The evolution of online communities
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Cyber attackers empty business accounts in minutes
Date: 9 Aug, 2009
Blog: Financial Crime Risk, Fraud and Security

re:
These types of attacks were readily/widely understood in the 90s ... there was
even the EU FINREAD standard as a countermeasure. The issue around the
start of this decade/century was the rapidly spreading rumor that card
acceptor devices weren't practical in the consumer market. misc. past
posts mentioning EU FINREAD standard:
https://www.garlic.com/~lynn/subintegrity.html#finread
The issue was that a number of programs attempted to give away serial-port card acceptor devices ... which ran into horrible consumer configuration problems. The numerous and difficult problems with serial ports were also a major motivation for USB.
The numerous problems with serial port configuration were well understood in the financial industry as recently as 1995. There were some number of presentations in the 1995/1996 era about reasons for migrating online, home banking programs from "custom" dial-up environments to the internet. Some online, dial-up, home banking programs claimed to have a software library of more than 60 different (serial port) modem device drivers ... to try to accommodate the wide variety of consumer (serial port) configurations ... as well as extensive call center consumer support for dealing with serial port configuration problems.
The migration of dial-up online banking to the internet ... offloaded all the serial port configuration problems onto the user's ISP ... and amortizing dial-up and configuration issues across a broad spectrum of online uses also helped motivate standardization & simplification (devices pre-installed & configured when sold ... instead of after-market installs, things like USB, etc).
In any case, all the financial industry institutional knowledge about the difficult configuration problems appeared to evaporate in that five year period ... which then led to rapidly spreading opinion that (serial port, card acceptor) devices weren't practical in the consumer market.
As an aside, the mid-90s financial industry presentations regarding migration of dial-up home banking to the internet were usually accompanied by statements that online commercial "cash management" systems (the business flavor of online home banking) would NEVER be migrated to the internet; because of the security issues (and the much larger financial exposure for business operations compared to most consumer accounts).
... from the 90s (more than a decade ago) ... it was well understood that PCs were easily/trivially compromised ... and that there was going to be a long litany of compromises. The simplest were compromises that recorded privileged information and forwarded it over covert channels to criminals to use for fraudulent purposes.
A countermeasure would be a physically separate hardware token that was required to perform a valid transaction (something you have authentication that uses more than "static data" and so is immune to replay attacks).
The attack against a separate authentication device would be PC compromises that run on the victim PC and directly perform fraudulent transactions ... using the valid something you have authentication device. Any PIN-entry requirement (separate static something you know multi-factor authentication) could be recorded and replayed by the covert software.
The countermeasure is a physically separate device for connecting the hardware token and entering the PIN. The compromised PC software wouldn't be able to monitor any such PIN-entries and/or fake such PIN-entry.
The fraud attack on that countermeasure is for the covert software running on the victim PC to impersonate a valid financial transaction but, when it comes time to actually execute, change it to something completely different.
Since PCs were viewed as so trivially compromised by a wide variety of attacking software, the EU FINREAD standard attempted to create countermeasures that addressed:
1) malicious software eavesdrops on sensitive information and forwards it to crooks over a covert channel,
2) malicious software directly executes transactions on the compromised machine, impersonating human action (when an attached, physically unique authentication hardware token is required),
3) malicious software on the compromised machine impersonates a valid transaction (when it is not possible to impersonate the human interaction required for operation of the physically unique hardware token).
EU FINREAD was an external box that interacted with a physically unique something you have authentication hardware token, had its own physically unique something you know PIN entry, and had its own physically unique display. Financial transactions were required to be recognized by the EU FINREAD device ... so EU FINREAD could display the actual transaction being authorized.
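The flow described above can be sketched in a few lines. This is a hypothetical illustration (not taken from the standard): an HMAC keyed inside the token stands in for whatever authentication mechanism a real token would use, and all names are invented.

```python
import hmac, hashlib

# Sketch of the EU FINREAD idea: the secure device displays the actual
# transaction, and only a confirmation on the device's own keypad releases
# the token's authentication. An HMAC with a key held only inside the
# token stands in for the token's real mechanism.

TOKEN_KEY = b"secret-held-inside-hardware-token"

def token_sign(transaction: bytes) -> bytes:
    """Runs inside the tamper-resistant token, not on the PC."""
    return hmac.new(TOKEN_KEY, transaction, hashlib.sha256).digest()

def finread_authorize(transaction, user_confirms):
    """The FINREAD box shows the real transaction on its own display;
    only confirmation via its own PIN pad releases the signature."""
    if user_confirms(transaction.decode()):   # independent display + keypad
        return token_sign(transaction)
    return None

def bank_verify(transaction: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(token_sign(transaction), signature)

# Malware on the PC swaps the transaction *after* the user approved it:
approved = b"pay $20 to grocer"
sig = finread_authorize(approved, lambda shown: shown == "pay $20 to grocer")
tampered = b"pay $9000 to crook"
print(bank_verify(approved, sig))   # True  - what was displayed is what was signed
print(bank_verify(tampered, sig))   # False - the swapped transaction fails
```

The key property is that the signature covers exactly the bytes the independent display showed, so a transaction substituted by the compromised PC no longer verifies.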
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: VTAM security issue
Newsgroups: bit.listserv.ibm-main
Date: 9 Aug 2009 08:57:55 -0700

chrismason@BELGACOM.NET (Chris Mason) writes:
The closest (with a networking layer) that might be claimed for SNA might be APPN ... but the original APPN announcement was vetoed by the SNA organization ... after 6-8 weeks it was finally announced, but the announcement letter was carefully rewritten to avoid implying any relationship between APPN and SNA. We would periodically hassle the person responsible for APPN to stop playing around with internal stuff and come work on real networking (the APPN specification was AWP164).
Back in the early days of SNA ... my wife was co-author of "peer-to-peer" networking (only because SNA had co-opted the term "networking" was it necessary to qualify the specification as "peer-to-peer"). This was (internal specification) AWP39 ... and possibly the SNA organization viewed it as competition (even tho SNA had nothing to do with networking).
Later when she had been con'ed into going to POK to be in charge of
loosely-coupled architecture ... she had lots of battles with the SNA
organization. Eventually there was a (temporary) truce where she was
allowed to do anything she wanted within the perimeter of the datacenter
walls ... but SNA had to be used anytime the datacenter walls were crossed.
While in POK, she created Peer-Coupled Shared Data architecture
... some past posts
https://www.garlic.com/~lynn/submain.html#shareddata
which, except for IMS hot-standby, saw very little uptake until sysplex.
Somewhat related recent post mentioning working on PU4/PU5 emulation
product .... that carried RUs in real networking environment (but
converted at boundaries when talking to real pu5/vtam host)
https://www.garlic.com/~lynn/2009k.html#70 An inComplete History Of Mainframe Computing
above references this decade old post ... which includes part of
project presentation that I gave at fall86 SNA architecture review
board meeting in Raleigh
https://www.garlic.com/~lynn/99.html#67
as an aside ... the internal network ... which wasn't SNA ... was larger
than the arpanet/internet from just about the beginning until possibly
late-85 or early-86
https://www.garlic.com/~lynn/subnetwork.html#internalnet
post mentioning the 1000th node (on the internal network) in 1983 (83
was when the arpanet/internet was passing 255 hosts)
https://www.garlic.com/~lynn/2006k.html#8
above also lists internal datacenters/locations having one or more new/added networking nodes during 1983.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Card PINs traded at two for a dollar
Date: 9 Aug, 2009
Blog: Financial Crime Risk, Fraud and Security

Card PINs traded at two for a dollar
from above:
One vendor, a Russian, offers a Chinese customer free translations of
the product's instruction manual; another promises friendly
technical support
... snip ...
Note that multi-factor authentication is considered to be more secure because of the assumption of independent threats and vulnerabilities for the different factors.
something you know PINs are considered a countermeasure to lost/stolen something you have tokens (modulo the enormous proliferation of requirements for unique pins/passwords, resulting in individuals with scores of pins/passwords to remember ... and one study finding that 30% of pin-based payment cards have the pin written on them). something you have tokens are a countermeasure to eavesdropping of (static) something you know PINs/passwords.
However, the rise of skimming and the trivial ease with which magstripe something you have tokens can be counterfeited have resulted in a common vulnerability where payment processing end-points are compromised to skim both the magstripe and the PIN at the same time (negating the assumption that multi-factor is more secure because of independent vulnerabilities).
misc. past post mentioning 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
and past posts mentioning skimming/harvesting (static) information
https://www.garlic.com/~lynn/subintegrity.html#harvest
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Internal fraud isn't new, but it's news.
Date: 9 Aug, 2009
Blog: Financial Crime Risk, Fraud and Security

Internal fraud isn't new, but it's news.
To some extent ... internal fraud ... and data breaches are similar (some studies have found that 70% of identity theft data breaches involved insiders). It apparently was the lack of attention and/or corrective action that resulted in the cal. data breach notification legislation; anticipating that the associated publicity might motivate corrective action.
We were tangentially involved in the legislation, having been brought in to help word-smith the cal. electronic signature legislation. Some of the institutions involved in the signature legislation were also involved in privacy issues and had done in-depth customer surveys on the subject of individual/consumer privacy ... and the clear #1 issue was "identity theft" ... in large part, account fraud & fraudulent transactions resulting from data breaches.
They also had concurrent legislative efforts to require "opt-in" for
information sharing (i.e. require explicit individual authorization to
share information) ... until preempted by the "opt-out" provisions of
GLBA (bank modernization act) ... which also repealed Glass-Steagall
(repeal implicated in the current economic troubles)
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html
recent posts also mentioning GLBA reference:
https://www.garlic.com/~lynn/2009c.html#38 People to Blame for the Financial Crisis
https://www.garlic.com/~lynn/2009c.html#49 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#53 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#55 Who will give Citigroup the KNOCKOUT blow?
https://www.garlic.com/~lynn/2009c.html#65 is it possible that ALL banks will be nationalized?
https://www.garlic.com/~lynn/2009d.html#10 Who will Survive AIG or Derivative Counterparty Risk?
https://www.garlic.com/~lynn/2009d.html#28 I need insight on the Stock Market
https://www.garlic.com/~lynn/2009d.html#61 Quiz: Evaluate your level of Spreadsheet risk
https://www.garlic.com/~lynn/2009d.html#62 Is Wall Street World's Largest Ponzi Scheme where Madoff is Just a Poster Child?
https://www.garlic.com/~lynn/2009d.html#63 Do bonuses foster unethical conduct?
https://www.garlic.com/~lynn/2009d.html#73 Should Glass-Steagall be reinstated?
https://www.garlic.com/~lynn/2009e.html#0 What is swap in the financial market?
https://www.garlic.com/~lynn/2009e.html#8 The background reasons of Credit Crunch
https://www.garlic.com/~lynn/2009e.html#13 Should we fear and hate derivatives?
https://www.garlic.com/~lynn/2009e.html#23 Should FDIC or the Federal Reserve Bank have the authority to shut down and take over non-bank financial institutions like AIG?
https://www.garlic.com/~lynn/2009e.html#35 Architectural Diversity
https://www.garlic.com/~lynn/2009f.html#29 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009f.html#38 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#51 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#53 What every taxpayer should know about what caused the current Financial Crisis
https://www.garlic.com/~lynn/2009g.html#5 Do the current Banking Results in the US hide a grim truth?
https://www.garlic.com/~lynn/2009g.html#7 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#33 Treating the Web As an Archive
https://www.garlic.com/~lynn/2009g.html#76 Undoing 2000 Commodity Futures Modernization Act
https://www.garlic.com/~lynn/2009h.html#17 REGULATOR ROLE IN THE LIGHT OF RECENT FINANCIAL SCANDALS
https://www.garlic.com/~lynn/2009i.html#54 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009i.html#60 In the USA "financial regulator seeks power to curb excess speculation."
https://www.garlic.com/~lynn/2009i.html#74 Administration calls for financial system overhaul
https://www.garlic.com/~lynn/2009i.html#77 Financial Regulatory Reform - elimination of loophole allowing special purpose institutions outside Bank Holding Company (BHC) oversigh
https://www.garlic.com/~lynn/2009j.html#21 The Big Takeover
https://www.garlic.com/~lynn/2009j.html#30 An Amazing Document On Madoff Said To Have Been Sent To SEC In 2005
https://www.garlic.com/~lynn/2009j.html#35 what is mortgage-backed securities?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Cyber attackers empty business accounts in minutes
Date: 9 Aug, 2009
Blog: Financial Crime Risk, Fraud and Security

re:
The EU FINREAD standard from the 90s was to address numerous, easily done PC compromises. Things haven't changed much since then.
There are public key and token solutions that address the issues and, properly done ... have little incremental cost. Poorly done implementations in the past have raised issues about costs and overhead (in part the prospect that there might have to be a whole series of deployments ... before finally getting it right ... and in some cases the deployment problems were so severe that call-center problem calls were one of the largest expenses).
Browser in a sandbox ... a flavor is provided by virtualization ... where any compromises are discarded at the end of the session by evaporating the virtual environment. This dates back more than 40yrs ... I was a latecomer to virtualization, only having been involved since January, 1968.
Many banks in the mid-90s rolled out PKIs ... but they were overly
expensive and didn't fully address the issues. One of the issues was
certificate-based PKI for payment transactions ... where the
certificate gorp represented a 100-times increase in typical payment
transaction payload ... and it could be shown that the appended
certificate gorp was redundant and superfluous (besides representing
a factor-of-100 increase in payload size). A number of recent news
item discussions:
https://www.garlic.com/~lynn/2009k.html#21 Security certificate warnings don't work, researchers say
https://www.garlic.com/~lynn/2009k.html#38 More holes found in Web's SSL security protocol
and in the financial cryptography blog:
https://www.garlic.com/~lynn/2009k.html#33 Trouble in PKI land
and in the cryptography mailing list
https://www.garlic.com/~lynn/2009k.html#72 Client Certificate UI for Chrome?
Note that in the original use of SSL for "electronic commerce" (mentioned previously), we had mandated mutual authentication (between the webservers and the payment gateway) ... which hadn't been previously implemented. However, by the time we were done with the implementation and deployment ... it was clear that the digital certificates were redundant and superfluous (besides a payload bloat) ... since all parties had to have been previously registered with accounts.
This is as simple as registering public keys in lieu of PINs or
password (and eliminates all the rest of the PKI overhead, expense,
complexity, processing & payload penalties). We've been able to
demonstrate it with the x9.59 financial standard protocol
https://www.garlic.com/~lynn/x959.html#x959
as well as for RADIUS ... some past posts
https://www.garlic.com/~lynn/subpubkey.html#radius
and Kerberos ... some past posts
https://www.garlic.com/~lynn/subpubkey.html#kerberos
... aka both transaction as well as session oriented protocols.
aka public key, certificate-less operation
https://www.garlic.com/~lynn/subpubkey.html#certless
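The "public key registered in lieu of a PIN" idea above can be illustrated with a toy sketch. Textbook RSA with tiny primes stands in for a real token's key pair (real keys are vastly larger), and all account names here are invented for illustration:

```python
import hashlib, secrets

# Certificate-less public key operation: the relying party stores the
# account holder's public key in the account record (just as it would
# store a PIN) and verifies a signed, fresh challenge -- no certificate
# ever changes hands, so there is no PKI payload bloat.

p, q = 1000003, 1000033              # toy primes, for demonstration only
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (stays with the user)

accounts = {"acct-123": (n, e)}      # registry: account -> public key

def sign(message: bytes) -> int:
    """Done by the user's token/software with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(account: str, message: bytes, signature: int) -> bool:
    """Done by the relying party against the registered public key."""
    n_, e_ = accounts[account]
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n_
    return pow(signature, e_, n_) == h

challenge = secrets.token_bytes(16)  # fresh per transaction: defeats replay
sig = sign(challenge)
print(verify("acct-123", challenge, sig))       # True
print(verify("acct-123", b"replayed-old", sig)) # False
```

Because the challenge is fresh for every transaction, a recorded signature (unlike a recorded static PIN) is useless for replay.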
Note that the trivial, widespread, and well known compromises of PCs
connected to the internet ... aren't directly an MITM attack
... they are compromises of an internet end-point node (the PC).
https://www.garlic.com/~lynn/subintegrity.html#mitmattack
The EU FINREAD standard from the 90s was an attempt to define a
physically separate, secure box that was immune to and could
compensate for the PC compromises (i.e. EU FINREAD
was a countermeasure to end-point compromises).
https://www.garlic.com/~lynn/subintegrity.html#finread
Typically a MITM-attack is something that attempts to sit in the middle and transparently spoof or impersonate the two parties to each other ... i.e. one of the SSL MITM-attacks involves the user clicking on a URL that takes them to a website claiming to be their online banking site (even with a valid SSL digital certificate). The bogus website then forwards the user's communication ... via a separate SSL connection ... to the real online banking site. The bogus MITM website is transparently emulating the user to the real online banking site (using the real user's communication) and transparently emulating the real online banking site to the user (using the real online banking communication).
This is different from (and actually simpler than) spoofing an online banking website ... where the attacker makes a copy of all the data at the real online site at a bogus website. In the MITM-attack scenario ... all that is needed is a modified version of standard proxy code ... no copies of the online banking website are required ... since it transparently forwards the real information.
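The proxy-style MITM described above can be sketched as a toy, in-process simulation (no real networking or SSL; all names are hypothetical):

```python
# Toy illustration of a proxy-style MITM: the attacker needs no copy of
# the bank site -- it simply relays both directions while observing the
# traffic in passing. Real attacks operate on live SSL connections.

class Bank:
    """Stands in for the genuine online banking site."""
    def handle(self, request):
        if request.startswith("login:"):
            return "welcome, " + request.split(":", 1)[1]
        return "page: " + request

class MitmProxy:
    """Looks like the bank to the user, and like the user to the bank."""
    def __init__(self, bank):
        self.bank = bank
        self.harvested = []
    def handle(self, request):
        if request.startswith("login:"):
            self.harvested.append(request)   # eavesdrop on credentials
        return self.bank.handle(request)     # transparently forward

bank = Bank()
proxy = MitmProxy(bank)
# The user believes they are talking to the bank; responses are genuine:
print(proxy.handle("login:alice"))   # welcome, alice
print(proxy.harvested)               # ['login:alice'] -- credentials captured
```

Since every response the victim sees really came from the bank, nothing looks wrong from the user's side.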
In the EU FINREAD case, an independent, secure, physical box is used to validate whether the transaction (done by the PC end-point) is correct ... allowing the user to either approve the operation or not. In the PC compromise scenario, the software, the keyboard traffic, and the screen display can all be compromised.
a more inflammatory article on the subject:
It's time to get rid of Windows
http://blogs.computerworld.com/14510/its_time_to_get_rid_of_windows
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VTAM security issue
Newsgroups: bit.listserv.ibm-main
Date: Mon, 10 Aug 2009 07:24:12 -0400

patrick.okeefe@WAMU.NET (Patrick O'Keefe) writes:
it was the communication division ... not the networking division.
vtam/ncp (pu5/pu4) formed a communication hierarchy.
networking has tended to apply to talking to other peers ... which was the source of my comment that only in a situation where "networking" had been co-opted to apply to a communication hierarchy was it necessary to qualify the AWP39 networking architecture as "peer-to-peer networking architecture".
arpanet ... prior to the great conversion to tcp/ip on 1jan83 ... had hosts and network nodes (IMPs). The IMPs talked to other IMPs (network nodes) ... and they talked to attached devices (which happened to be hosts; later there were also "terminal" IMPs). IMPs (network nodes) exchanged information about what other IMPs (network nodes) they were connected to ... so it was possible to dynamically discover network nodes, paths to network nodes, and paths to connected devices/hosts.
The IMP/host configuration had some physical similarity to pu4/pu5 ... but the IMPs were full network nodes that managed the dynamic network configuration and then passed communication to/from destination hosts. At the time of the great change-over ... there were something like 100 IMPs (network nodes) and 255 hosts.
TCP/IP has both a network layer and an internetworking layer (gateways and other conventions for supporting the internetworking of networks). At the TCP/IP network layer there is ARP (address resolution protocol) and ARP caches (dynamic maps of IP-addresses to "data link" addresses ... if using the OSI model), as well as some maintenance/control gorp with ICMP messages (redirects, not reachable, etc). These are networking layer/node to networking layer/node exchanges.
The great change-over to TCP/IP effectively merged the IMP function into the hosts ... with hosts executing networking layer code and higher layer code. It was possible to do (dynamic, networking layer) router function in the hosts and/or custom "router" boxes. The "router" boxes could also provide the gateway/internetworking function for internetworking of networks.
the internal network ... some past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
originated at the science center (virtual machines, lots of interactive
computing, GML ... precursor to SGML, HTML, etc, lots of other stuff)
... some past posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech
I've claimed that one of the reasons that the internal network (not SNA) was larger than the arpanet/internet from just about the beginning until sometime late 85 (or early 86) was that the prevailing internal networking nodes contained a form of gateway function (significantly easing the addition of nodes in different parts of the network) ... which arpanet/internet didn't get until the 1jan83 great change-over.
We were in a booth at interop88 ... but not the ibm booth ... and starting late sunday night before the start of the show ... the (four) floor nets started crashing ... this continued into the wee hours of monday morning before being diagnosed and corrected. The problem was that a large number of nodes (lots & lots of workstations) had connections to all four floor nets ... and all had "ip-forwarding" turned on by default (router function). Any ARP request was automagically being rebroadcast by nearly all nodes on all networks ... which led to ARP storms. In the wake of interop88 ... RFC1122 (IETF internet standard) specified that IP-forwarding should be turned off by default.
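The rebroadcast blow-up can be illustrated with a back-of-envelope count; the numbers and the growth model here are purely illustrative assumptions, not measurements from interop88:

```python
# Toy model of the ARP-storm failure mode: multi-homed hosts with
# ip-forwarding on by default each repeat an ARP request onto every other
# attached net, so one request multiplies instead of dying out.
# Counts only; no real packets, and all parameters are invented.

def broadcasts(num_hosts, num_nets, forwarding_on, hops):
    """Rough count of copies of one ARP request after a few rebroadcast hops."""
    copies = 1
    for _ in range(hops):
        if not forwarding_on:
            break                     # RFC 1122 default: request answered, done
        # every multi-homed forwarding host repeats the request on its other nets
        copies *= num_hosts * (num_nets - 1)
    return copies

print(broadcasts(num_hosts=50, num_nets=4, forwarding_on=False, hops=3))  # 1
print(broadcasts(num_hosts=50, num_nets=4, forwarding_on=True, hops=3))   # 3375000
```

Even with modest node counts, a handful of rebroadcast generations turns a single request into millions of copies, which is why RFC 1122 made IP-forwarding off the default.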
my internet standard index
https://www.garlic.com/~lynn/rfcietff.htm
summary entry for 1122
https://www.garlic.com/~lynn/rfcidx3.htm#1122
1122 S
Requirements for Internet hosts - communication layers, Braden R.,
1989/10/01 (116pp) (.txt=289148) (STD-3) (Updated by 4379) (See Also
1123)
....
misc. past posts mentioning interop88
https://www.garlic.com/~lynn/subnetwork.html#interop88
TCP/IP is the technology basis for the modern internet, NSFNET backbone
was the operational basis for the modern internet, and CIX was the
business basis for the modern internet. some number of past posts
mentioning NSFNET backbone
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
and old email related to NSFNET backbone activity
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
Note that the domain name system provides a separate naming indirection
that rides on top of, and independent of, tcp/ip (for things like URL/domain
name to IP-address mapping). For a little archeological folklore, the
person originally responsible for the domain name system did a stint at
the science center in the early 70s. other old archeological references:
https://www.garlic.com/~lynn/aepay11.htm#43 Mockapetris agrees w/Lynn on DNS security - (April Fool's day??)
https://www.garlic.com/~lynn/aepay11.htm#45 Mockapetris agrees w/Lynn on DNS security - (April Fool's day??)
https://www.garlic.com/~lynn/aepay12.htm#18 DNS inventor says cure to net identity problems is right under our nose
https://www.garlic.com/~lynn/2008r.html#42 Online Bill Payment Website Hijacked - Users were redirected
https://www.garlic.com/~lynn/2008s.html#39 The Internet's 100 Oldest Dot-Com Domains
NAT (network address translation) ... is a gateway/router "add-on" function ... which at the network(ing) gateway boundary remaps traffic IP addresses. The router will take an internal networking address ... like a 10-net ... and remap it to appear as if it is coming from the router's internet/gateway IP-address. NAT'ing requires that the gateway router keep track of the mapping of internal ip-addresses to (internet) sessions using its own address ... in order to correctly rewrite the ip header address; i.e. potentially a large number of different internal IP-addresses have been modified to appear as if they are coming from the same router/gateway internet ip-address. Returning traffic will all be directly addressed to that router/gateway internet ip-address and needs to be correctly re-addressed for forwarding to the appropriate destination (the router needs to keep lots of administrative gorp to be able to correctly re-address & forward incoming, returning traffic).
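The bookkeeping described above can be sketched as a small simulation; a minimal model under simplifying assumptions (no timeouts, TCP state tracking, or checksum rewriting; addresses and port ranges are invented):

```python
import itertools

# Sketch of NAT bookkeeping: the gateway maps (internal-ip, internal-port)
# pairs onto ports of its single public address, rewriting outbound sources
# and re-addressing inbound replies.

PUBLIC_IP = "203.0.113.1"            # the gateway's one internet address

class NatGateway:
    def __init__(self):
        self.out_map = {}            # (int_ip, int_port) -> public port
        self.in_map = {}             # public port -> (int_ip, int_port)
        self.next_port = itertools.count(40000)

    def outbound(self, src_ip, src_port, payload):
        key = (src_ip, src_port)
        if key not in self.out_map:  # allocate one public port per flow
            port = next(self.next_port)
            self.out_map[key] = port
            self.in_map[port] = key
        return (PUBLIC_IP, self.out_map[key], payload)

    def inbound(self, dst_port, payload):
        src = self.in_map[dst_port]  # restore the internal destination
        return (*src, payload)

nat = NatGateway()
pkt = nat.outbound("10.0.0.5", 5555, "GET /")   # 10-net host hidden behind NAT
print(pkt)                                      # ('203.0.113.1', 40000, 'GET /')
reply = nat.inbound(pkt[1], "200 OK")
print(reply)                                    # ('10.0.0.5', 5555, '200 OK')
```

The two dictionaries are exactly the "administrative gorp" the text refers to: lose them and returning traffic can no longer be re-addressed to the right internal host.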
from my IETF internet standards index:
https://www.garlic.com/~lynn/rfcietff.htm
click on Term (term->RFC#) in the RFCs listed by section. Then
click on "NAT" in the Acronym fastpath
network address translation
5555 5508 5389 5382 5207 5135 5128 4966 4787 4380 4008 3947 3715
3519 3489 3424 3235 3105 3104 3103 3102 3027 3022 2993 2766 2709
2694 2663 2428 2391 1631
in the index, "clicking" on the RFC number brings up the summary
in the lower index. "clicking" on the ".txt=nnn" field in a summary
entry retrieves the actual RFC.
NOTE that NAT'ing is similar to ... but different from ... a VPN (virtual private network) implementation.
In the early 80s, I started a project I called HSDT ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
and we were running T1 & higher speed backbone links. HSDT was also getting some custom hardware built to spec ... some from overseas.
In the mid-80s, somebody in the communication division announced a new internal communication discussion group (on the internal network) ... the announcement went out on Friday before I was to take a trip to the other side of the pacific and included the following definitions:
low-speed          <9.6kbits
medium-speed       19.2kbits
high-speed         56kbits
very high-speed    1.5mbits

The following Monday morning, on a conference wall on the other side of the pacific was:

low-speed          <20mbits
medium-speed       100mbits
high-speed         200-300mbits
very high-speed    >600mbits

As indicated in the NSFNET related email ... we had been working with NSF on getting a NSFNET backbone with at least T1 links (since we could point to HSDT operational internal links at T1 and higher speeds).
One of the other differences between the internal network and the "internet" was that corporate required encryption on all links leaving corporate premises. This was a real hassle for some links in various places around the world ... especially when they connected corporate locations in different countries. In any case, at some point in the mid-80s, there was a comment that the internal network had over half of all the link encryptors in the world. One of the issues in HSDT was that, at the time, it was fairly straightforward to get link encryptors for 56kbit and slower links ... but it became increasingly difficult to get link encryptors that ran at T1 and higher speeds. At one point, I got involved in designing our own encryption hardware for HSDT.
For other archeological topic drift ... when we were doing ha/cmp
product ... some past posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
one of the transparent fall-over strategies was to do ip-address take-over (aka the take-over box acquires the ip-address of the failed box). we found a bug in the most commonly deployed tcp/ip code (a large number of vendors were using BSD TCP/IP 4.3 TAHOE or RENO). Normally the ARP code requires that ARP cache entries be periodically timed out and forced to be refreshed (which our ip-address "take-over" function depended on). However, BSD 4.3 TAHOE/RENO had a "bug" where, under specific conditions, an ARP entry might never time out (defeating the HA/CMP fall-over attempt to do ip-address take-over). We had to come up with a kludge work-around for the bug ... that would trick clients into refreshing the ARP entry (i.e. the ip/network address to physical address mapping).
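The ARP-cache behavior behind the problem can be modeled in a few lines; a toy sketch with invented names and timings, where a gratuitous-ARP-style announcement stands in for the kludge work-around:

```python
# Toy model of the ARP-cache behavior behind the HA/CMP take-over problem:
# if a client's ARP entry never times out, the taken-over IP address keeps
# resolving to the dead box's hardware address. An unsolicited (gratuitous)
# ARP announcement from the surviving node forces the refresh.

ARP_TIMEOUT = 60.0                          # seconds; illustrative value

class ArpCache:
    def __init__(self):
        self.entries = {}                   # ip -> (mac, time_learned)

    def learn(self, ip, mac, now):
        """Learns from replies AND unsolicited announcements."""
        self.entries[ip] = (mac, now)

    def lookup(self, ip, now):
        mac, t = self.entries[ip]
        if now - t > ARP_TIMEOUT:           # the buggy stacks skipped this
            del self.entries[ip]
            raise KeyError("entry expired; must re-ARP")
        return mac

cache = ArpCache()
cache.learn("192.0.2.10", "aa:aa:aa:aa:aa:aa", now=0.0)   # the failed box
# Take-over: the surviving box broadcasts a gratuitous ARP for the same IP,
# overwriting the stale entry without waiting for any timeout:
cache.learn("192.0.2.10", "bb:bb:bb:bb:bb:bb", now=10.0)
print(cache.lookup("192.0.2.10", now=20.0))   # bb:bb:bb:bb:bb:bb
```

With a stack that never expires entries, only the announcement path updates the mapping, which is why the work-around had to actively push a refresh rather than wait for a timeout that never came.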
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Mon, 10 Aug 2009 07:49:30 -0400
Chris Barts <chbarts+usenet@gmail.com> writes:
There is some resurgence of virtual machine use ... with large numbers of virtual machine sessions running concurrently on shared real machines.
CMS was originally developed as a single individual, personal computing environment on a dedicated 360/40 (non-shared) personal computer (mainframe, individual sitting at the dedicated 360/40 1052-7 keyboard).
CMS development went on concurrently (on real 360/40, non-shared personal computer mainframe) with the 360/40 hardware changes to support virtual memory and cp40 development to support virtual machines. When cp40 virtual machine support was far enough along, CMS personal computing development was able to move into a (shared) virtual machine environment (not all that different than a lot of virtual machine activity today).
Later, the science center acquired a 360/67 (that came standard with virtual memory hardware support) ... and virtual machine cp40 morphed into virtual machine cp67. however, cms design point continued to pretty much remain the dedicated (360/40 mainframe) personal computing environment ... that happened to be running in a virtual machine.
A lot of the early CMS personal computing characteristics had come from CTSS ... which I've claimed both UNIX and MULTICS trace back to (as a common ancestor). I've commented that in the late 80s, I had to deal with some unix (scheduling) code that was nearly identical to some cp67 code that I had replaced two decades earlier (and conjectured that possibly both had inherited it from CTSS).
for some additional details ... see Melinda's history ... which goes
into more detail regarding CTSS, MULTICS, and early virtual machine
period:
https://www.leeandmelindavarian.com/Melinda#VMHist
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Mon, 10 Aug 2009 09:05:50 -0400
re:
traditional cms virtual machine configuration definition basically defined the virtual mapping to real devices. basic cms configuration was the 360/40 dedicated personal computing mainframe, 1052-7 keyboard, 256kbytes (virtual) memory, 2540 card reader, 2540 card punch, 1403 printer, 2311 (mini-)disks.
cp40 (and later cp67) virtual machine support handled the translation of a 2741 keyboard/terminal to the emulated mainframe 1052-7 keyboard. It also handled mapping of the virtual unit record devices to real devices (again, analogous to what goes on in most current-day virtual machine implementations).
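A minimal sketch of that virtual-to-real mapping (the device addresses follow common CMS conventions, but the table contents here are illustrative, not from any actual CP directory):

```python
# Virtual device configuration a CMS guest sees, vs. what the control
# program actually backs each device with. Illustrative values only.
VIRTUAL_CONFIG = {
    "009": "1052-7 console",   # emulated keyboard, backed by a real 2741 line
    "00C": "2540 card reader",
    "00D": "2540 card punch",
    "00E": "1403 printer",
    "191": "2311 minidisk",
}

REAL_BACKING = {
    "009": "2741 terminal line",
    "00C": "spool file (input)",
    "00D": "spool file (output)",
    "00E": "spool file (print)",
    "191": "slice of a real disk pack",
}

def translate(vdev):
    """What a CP-style control program does on a virtual I/O request:
    resolve the virtual device the guest addressed to its real backing."""
    return (VIRTUAL_CONFIG[vdev], REAL_BACKING[vdev])

assert translate("009") == ("1052-7 console", "2741 terminal line")
```

The point is only the shape of the translation: the guest programs against the dedicated-360/40 device configuration while the control program redirects every request.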
in the 70s, there were some pathlength short-cuts added to cp67, with cms having support to differentiate whether it was running on a "real" machine (or virtual equivalent) ... or a virtual machine with the pathlength shortcuts. however, cms could still be ipled/booted and executed on a dedicated, real-machine, personal computer mainframe (not being shared for any other purpose). it was part of the morph to vm370 that cms was artificially crippled so that it would no longer ipl/boot on a real machine ... along with changing cms from "cambridge monitor system" to "conversational monitor system".
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Mon, 10 Aug 2009 10:29:28 -0400
re:
as an aside, by jan68 when 3 people came out from the science center to install cp67 at the univ., I had thousands of hrs of hands-on "personal computer" time using 360/30 and 360/67 mainframes.
earlier, the univ. had a 709 running ibsys ... and installed a 360/30 as part of a plan to move from 709/ibsys to a 360/67 running tss/360. the 360/30 had 1401 hardware emulation mode for the 1401 it replaced (as unit-record front-end for the 709). The plan was that the 360/30 would be used for some amount of time in 360 mode ... to gain 360 experience at the univ.
During this period ... the univ. would also shutdown the computing center at 8am saturday ... and wouldn't re-open until 8am monday. As a result, they would let me have the computer center and everything in it for 48hrs straight (monday classes were a little hard, having not slept for 48hrs). On the weekends, I had the computing center all to myself and got to use the 64kbyte 360/30 as my personal computer for 48hrs straight ... it was similar to later 64kbyte personal computers ... except in a larger form factor. Later, the 360/30 was replaced with the 360/67 ... and I was still allowed to have the computing center all to myself for the weekend, using the 768kbyte 360/67 as my personal computer.
in any case, by the time cp67 was installed at the univ. ... I already had thousands of hrs of hands-on mainframe personal computer experience.
misc. past posts mentioning having use of 360/30 as my personal computer
https://www.garlic.com/~lynn/98.html#55 Multics
https://www.garlic.com/~lynn/2004d.html#10 IBM 360 memory
https://www.garlic.com/~lynn/2004g.html#0 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2005b.html#18 CAS and LL/SC
https://www.garlic.com/~lynn/2005b.html#54 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005h.html#35 Systems Programming for 8 Year-olds
https://www.garlic.com/~lynn/2005n.html#8 big endian vs. little endian, why?
https://www.garlic.com/~lynn/2006k.html#27 PDP-1
https://www.garlic.com/~lynn/2006o.html#43 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2008r.html#19 What if the computers went back to the '70s too?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: VTAM security issue Newsgroups: bit.listserv.ibm-main Date: 10 Aug 2009 07:37:27 -0700
re:
the communication division did provide the basis for rapid uptake of personal computers via terminal (communication) emulation. A customer could get an ibm/pc with terminal (communication) emulation for about the same price as a 3270 terminal ... which provided both 3270 terminal function as well as some local computing ... in a single desktop foot-print. For a lot of customers ... it was a nearly brain-dead task to change already-justified 3270 terminal orders to IBM/PCs ... significantly simplifying what was needed to order tens of thousands of IBM/PCs.
The result was that the communication division grew a significant terminal emulation install base. Later, as PCs became more powerful and were starting to implement all sorts of networking features ... there was a determined effort by the communication division to preserve its terminal emulation install base.
The result was that lots of data started leaking out of the datacenter into the distributed environment ... because the limited "terminal emulation" spigot was starting to represent a significant business bottleneck. The disk division developed some number of powerful solutions ... which were constantly being vetoed by the communication division (my earlier reference to the truce with my wife that everything that crossed the datacenter walls had to be SNA).
In the late 80s, the situation had gotten so bad that a senior engineer from the disk division managed to get a talk scheduled at the annual, world-wide, internal communication division conference. They began their talk with the statement that the head of the communication division was going to be responsible for the demise of the disk division (because the terminal emulation communication was becoming an increasingly severe bottleneck for use of the mainframe data and so users were taking the data completely out of the mainframe/datacenter to more friendly environs).
misc. past posts discussing the terminal emulation (communication)
issue
https://www.garlic.com/~lynn/subnetwork.html#emulation
in that same time-frame we had come up with 3-tier (middle layer,
middleware, etc) networking architecture and were out pitching it to
customer executives. We were taking lots of barbs from the
communication division ... that was trying to preserve the terminal
emulation communication paradigm. misc. past posts about 3-tier
networking architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier
this also played a factor in the internet exceeding the size of the internal network in late '85 (possibly early '86). arpanet/internet saw a big limiting factor removed with the great change-over to internetworking protocol on 1jan83. This somewhat put it on a level footing with internal networking's gateway capability (w/o gateways, there tends to be a much higher level of end-to-end coordination and management of all the components in the network, representing a barrier to adding new components).
Then tcp/ip full networking capability was being deployed on workstations and PCs ... while internally, such computers were limited to connectivity by terminal emulation communication. The number of internet network nodes really exploded by including workstations and PCs.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Mon, 10 Aug 2009 13:13:00 -0400
Charles Richmond <frizzle@tx.rr.com> writes:
I had done a huge amount of pathlength optimization, general I/O thruput optimization, page replacement algorithm optimization and page I/O optimization for cp67.
I also did dynamic adaptive resource management ... frequently referred to as fair share scheduling ... because the default policy was resource fair share.
Grenoble science center had a 360/67 similar to the cambridge science center machine ... but with 1mbyte of real storage instead of only 768kbytes (the Grenoble machine netted 50% more memory for paging ... after cp67 fixed storage requirements ... than the cambridge machine).
At one point, Grenoble took cp67 and modified it to implement relatively straightforward "working set" dispatching ... pretty much as described in the computer literature of the time ... and published an article in ACM with a fair amount of detailed workload, performance and thruput study.
It turned out that the (modified) Grenoble cp67 system with 35 users (and 50% more real storage for paging), running nearly the same workload, got about the same thruput as the Cambridge cp67 system did with 80 users (and my dynamic adaptive resource management).
On the Cambridge system, trivial interactive response degraded much more gracefully ... even under extremely heavy workloads (including during extended periods of 100% cpu utilization, high i/o activity and high paging i/o activity).
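The "50% more memory for paging" claim can be checked with back-of-envelope arithmetic; the 256kbyte figure for cp67 fixed storage is an inference (the value that makes the claim work out), not a number from the post:

```python
# pageable memory = real storage - cp67 fixed-storage requirements
cambridge_real_kb = 768
grenoble_real_kb = 1024
fixed_kb = 256   # assumed; inferred so that the 50% claim holds

cambridge_pageable = cambridge_real_kb - fixed_kb   # 512 KB
grenoble_pageable = grenoble_real_kb - fixed_kb     # 768 KB
assert grenoble_pageable / cambridge_pageable == 1.5  # i.e. 50% more
```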
misc. past posts mentioning cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
misc. past posts mentioning grenoble science center:
https://www.garlic.com/~lynn/93.html#7 HELP: Algorithm for Working Sets (Virtual Memory)
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2002q.html#24 Vector display systems
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006i.html#42 virtual memory
https://www.garlic.com/~lynn/2006j.html#1 virtual memory
https://www.garlic.com/~lynn/2006j.html#17 virtual memory
https://www.garlic.com/~lynn/2006j.html#25 virtual memory
https://www.garlic.com/~lynn/2006l.html#14 virtual memory
https://www.garlic.com/~lynn/2006o.html#11 Article on Painted Post, NY
https://www.garlic.com/~lynn/2006q.html#19 virtual memory
https://www.garlic.com/~lynn/2006q.html#21 virtual memory
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2007i.html#15 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?
https://www.garlic.com/~lynn/2007u.html#79 IBM Floating-point myths
https://www.garlic.com/~lynn/2007v.html#32 MTS memories
https://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
https://www.garlic.com/~lynn/2008h.html#70 New test attempt
https://www.garlic.com/~lynn/2008h.html#79 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008r.html#21 What if the computers went back to the '70s too?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: 10 Aug 2009 10:54:29 -0700
chrismason@BELGACOM.NET (Chris Mason) writes:
as an aside ... the network layers on different nodes can interact ... i.e. like ARP and ARP cache operations. most implementations also have service definitions, which can be things that sit outside the layers (as defined in the OSI model).
I was on the XTP technical advisory board and participated in taking "HSP protocol" (high-speed protocol) to x3s3.3 for standardization (x3s3.3 was the US ISO-chartered organization for level 3 & level 4 standards ... i.e. networking layer and transport layer). ISO had a policy that it would not accept work for standards that violated the OSI model.
HSP protocol was rejected by x3s3.3 for standardization because it violated OSI model:
1) HSP supported internetworking ... which doesn't exist in the OSI model ... and therefore couldn't be considered for standardization by ISO or an ISO-chartered organization. Internetworking corresponds approximately to a non-existent layer between layer 3/network and layer 4/transport.
2) HSP went directly from level 4/transport to the LAN MAC interface ... bypassing the layer 3/4 (networking/transport) interface ... violating the OSI model ... and therefore couldn't be considered for standardization by ISO or an ISO-chartered organization.
3) HSP went directly from level 4/transport to the LAN MAC interface ... and the LAN MAC interface doesn't exist in the OSI model ... it sits approximately in the middle of the networking layer (LAN/MAC subsumes part of the feature/function defined in the network layer). Since HSP went directly to the LAN MAC interface, something that doesn't exist in the OSI model, it couldn't be considered for standardization by ISO or an ISO-chartered organization.
misc. past posts mentioning XTP and HSP standardization work (and ISO
policy of not accepting standards work items that violate OSI model):
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
there is lots of past folklore about why OSI&ISO was the way it was.
part of the folklore is that it was done by traditional telco and communication people ... who actually had little experience trying to deal with interoperability between multiple different networks. The people in the govs. tended to have even less experience dealing with interoperability between multiple networks (than the people in the standards groups).
another distinction periodically cited was that IETF (the internet standardization organization) has required at least two different interoperable implementations before something could be standardized. ISO doesn't require any actual implementation prior to standardization (it is periodically claimed it is possible to have ISO standards that are impossible to implement).
I had mentioned interop88 ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
... and the (US Federal) GOSIP mandates were starting to come on strong (internet will be eliminated and everything replaced by products that conformed to OSI/ISO standards) ... so even though interop88 was nominally annual internetworking (TCP/IP) ... lots of the vendors (& booths) were showing "OSI" products. Part of this was that many of the vendors had little more experience with the difficulty of interoperating between networks (than the GOSIP people, other gov. people and/or the ISO/OSI standards people).
a few old posts mentioning bits & pieces of GOSIP:
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2006k.html#6 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006k.html#45 Hey! Keep Your Hands Out Of My Abstraction Layer!
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Tue, 11 Aug 2009 09:30:04 -0400
Chris Barts <chbarts+usenet@gmail.com> writes:
old communication discussing implementation for PGP-like email:
https://www.garlic.com/~lynn/2007d.html#email810506
https://www.garlic.com/~lynn/2006w.html#email810515
as i've mentioned before, the internal network was larger than the
arpanet/internet from just about the beginning until possibly late 85 (or early
86) ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
one of the issues is that corporate guidelines required encryption on all links that left corporate premises. in the mid-80s, there were comments that the internal network had over half of all the link encryptors in the world. a really difficult issue was with links that went between two corporate locations in different countries (and crossed national boundaries). there were lots of issues with gov. bodies (all over the world) ... talking them into approving the encryption.
recent post in mainframe mailing list that started with purely old mainframe issues ... but then strayed into other networking (& encryption issues).
one of the things mentioned was that it was fairly easy to get
link encryptors for up to 56kbits/sec ... but in HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt
we were dealing with T1 and higher speed links ... and it became increasingly difficult to get encryption at the higher speeds ... so at one point I became involved in designing custom, high-speed encryption hardware (eventually I concluded ... in that period ... there were three kinds of encryption: the kind they don't care about, the kind you can't do, and the kind that you can only do for them ... this was after I was told that I could produce as many boxes as I wanted ... but the gov. would be the only customer; it had nothing at all to do with whether it was a shared computer or not).
We were also involved in various organizations about what the NSFNET
backbone should look like (tcp/ip is the technology basis for the modern
internet, NSFNET backbone was the operational basis for modern
internetworking of internets and CIX was business basis for modern
internet). We could point to our operational T1 & higher speed backbone
links ... so the NSFNET backbone RFP specified T1. As it turned out, we
weren't allowed to bid on the NSFNET backbone (even with support by
director of NSF and others) ... and the winning bid didn't actually
install T1 links (somewhat to meet the letter of the RFP, they installed
T1 trunks and used telco multiplexors to run multiple 440kbit links over
the T1 trunks). misc. old email mentioning various NSFNET backbone
related things
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
and misc. posts mentioning various nsfnet related stuff
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
for other drift, tymshare was a large commercial (virtual machine,
vm370-based) online service bureau ... and they had customers all over
the world ... some of which required encryption to protect the data.
Tymshare wasn't the only commercial vm370-based online service bureau
serving lots of large commercial companies ... including large financial
institutions ... that required a very high level of security and integrity
(and also secrecy/autonomy from the service bureau operator). misc.
past posts mentioning (virtual machine based) commercial online
service bureaus:
https://www.garlic.com/~lynn/submain.html#timeshare
tymshare also provided their online computer conferencing for free to
the SHARE organization starting in Aug76 ... URL for VMSHARE archives:
http://vm.marist.edu/~vmshare/
the issue wasn't whether it was, or was not, shared computers ... it was whether it got on the radar of gov. organizations.
in the early 90s, we were brought in to consult with a small client/server company that wanted to do payment transactions on their server ... they had invented this technology called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". As part of doing payment transactions, we had to do a lot of work on how the technology was actually being applied to the payment business processes.
there was some amount of gov. involvement & it seemed to be more related to the number of people using the technology rather than whether it was a private computer or not (both early PGP and early SSL). I don't know if people remember the "key escrow" uproar from the period ... we got invited to attend some number of the (official) "key escrow" organization meetings. There were also lots of issues with many other govs. around the world (as the internal network had encountered).
some recent threads/posts mentioning that SSL era:
https://www.garlic.com/~lynn/2009j.html#5 Database Servers: Candy For Hackers
https://www.garlic.com/~lynn/2009j.html#11 Is anyone aware of a system that offers three layers of security and ID protection for online purchases or even over the counter POS purchases?
https://www.garlic.com/~lynn/2009j.html#13 PCI SSC Seeks Input on Security Standards
https://www.garlic.com/~lynn/2009j.html#20 Kaminsky interview: DNSSEC addresses cross-organizational trust and security
https://www.garlic.com/~lynn/2009j.html#23 Database Servers: Candy For Hackers
https://www.garlic.com/~lynn/2009j.html#25 Database Servers: Candy For Hackers
https://www.garlic.com/~lynn/2009j.html#33 IBM touts encryption innovation
https://www.garlic.com/~lynn/2009j.html#41 How can we stop Credit card FRAUD?
https://www.garlic.com/~lynn/2009j.html#48 Replace the current antiquated credit card system
https://www.garlic.com/~lynn/2009j.html#56 Replace the current antiquated credit card system
https://www.garlic.com/~lynn/2009j.html#57 How can we stop Credit card FRAUD?
https://www.garlic.com/~lynn/2009k.html#21 Security certificate warnings don't work, researchers say
https://www.garlic.com/~lynn/2009k.html#23 Security certificate warnings don't work, researchers say
https://www.garlic.com/~lynn/2009k.html#25 Don't Take Fraud Out of Context
https://www.garlic.com/~lynn/2009k.html#33 Trouble in PKI land
https://www.garlic.com/~lynn/2009k.html#38 More holes found in Web's SSL security protocol
https://www.garlic.com/~lynn/2009k.html#46 More holes found in Web's SSL security protocol
https://www.garlic.com/~lynn/2009k.html#53 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009k.html#54 The satate of software
https://www.garlic.com/~lynn/2009k.html#60 The satate of software
https://www.garlic.com/~lynn/2009k.html#63 The satate of software
https://www.garlic.com/~lynn/2009k.html#64 PayPal hit by global outage
https://www.garlic.com/~lynn/2009k.html#72 Client Certificate UI for Chrome?
https://www.garlic.com/~lynn/2009k.html#77 Cyber attackers empty business accounts in minutes
https://www.garlic.com/~lynn/2009l.html#6 Cyber attackers empty business accounts in minutes
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: 11 Aug 2009 07:08:02 -0700
re:
a "network" layer with feature like ARP ... also enables BOOTP&DHCP ... with things like "reverse ARP" ... a machine dynamically loading their network characteristic from service in local environment. bootp/dhcp provides for dynamic network configuration for things like roaming laptops in wireless environment.
from my IETF RFC index:
https://www.garlic.com/~lynn/rfcietff.htm
click on Term (term->RFC#) in the RFCs listed by section; then
in the Acronym fastpath section there is "BOOTP":
bootstrap protocol (BOOTP )
see also configuration , reverse address resolution protocol
2132 1542 1534 1533 1532 1497 1395 1084 1048 951
"DHCP":
dynamic host configuration protocol (DHCP )
see also configuration , host , reverse address resolution protocol
5505 5460 5417 5223 5192 5139 5107 5071 5010 5007 4994 4941 4833 4776
4704 4703 4702 4701 4676 4649 4580 4578 4477 4436 4390 4388 4361 4332
4280 4243 4242 4174 4076 4039 4030 4014 3993 3942 3927 3925 3898 3825
3736 3679 3646 3634 3633 3594 3527 3495 3456 3442 3397 3396 3361 3319
3315 3256 3203 3118 3074 3046 3041 3011 3004 2939 2937 2855 2610 2563
2489 2485 2322 2242 2241 2132 2131 1541 1534 1533 1531
and "RARP"
reverse address resolution protocol (RARP )
see also address resolution
5505 5460 5417 5223 5192 5139 5107 5071 5010 5007 4994 4941 4833 4776
4704 4703 4702 4701 4676 4649 4580 4578 4477 4436 4390 4388 4361 4332
4280 4243 4242 4174 4076 4039 4030 4014 3993 3942 3927 3925 3898 3825
3736 3679 3646 3634 3633 3594 3527 3495 3456 3442 3397 3396 3361 3319
3315 3256 3203 3118 3074 3046 3041 3011 3004 2939 2937 2855 2610 2563
2489 2485 2322 2242 2241 2132 2131 1931 1542 1541 1534 1533 1532 1531
1497 1395 1084 1048 951 903
...
clicking on RFC number brings up the corresponding summary in the lower frame, clicking on the ".txt=nnn" field (in the summary) retrieves the actual RFC.
the attempts by the communication division to preserve the terminal communication paradigm ... showed up in lots of ways.
one of the ways was vetoing the disk division's numerous attempts to come out with advanced access products for the distributed environment (as part of stemming the flight of data out/off the mainframe). the communication division could escalate to corporate with the claim that it had responsibility for all products that involved communication crossing the datacenter boundary (at the peak, the disk division was seeing double-digit per annum percentage flight of data off the mainframe).
another trivial example was PC LAN cards. high-powered workstations represented the quickly evolving direction for PCs ... requiring increasingly powerful networking.
the AWD division had done the PC/RT and had produced their own ISA 4mbit
T/R LAN card. When it came time for the rs/6000, they thought that they
would do their own microchannel 16mbit T/R LAN card ... but they were
wrong ... they were forced into using lots of the PS2 microchannel
adapter cards. It turns out that the PS2 microchannel 16mbit T/R LAN
card had a design point of terminal emulation, with 300 (or more) stations
all sharing the same 16mbit T/R bandwidth. As a result, the PS2
microchannel 16mbit T/R card had lower per-card thruput than the PC/RT
ISA 4mbit T/R card ... significantly restricting its ability to
participate in high-powered networking environments.
references to old 3tier presentation from the late 80s
(attempting to give the terminal communication paradigm as much benefit
as possible in the comparison):
https://www.garlic.com/~lynn/96.html#17
https://www.garlic.com/~lynn/2002q.html#40
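Back-of-envelope arithmetic for the shared-bandwidth design point above (ignoring token-passing and protocol overhead; the per-ring workstation count is an assumption for illustration):

```python
# 300 stations sharing one 16mbit token ring, vs. a handful of
# high-powered workstations on a dedicated 4mbit ring.
shared_ring_mbit = 16
stations = 300
per_station_kbit = shared_ring_mbit * 1000 / stations
assert round(per_station_kbit) == 53   # ~53 kbit/sec each: terminal-class

workstation_ring_mbit = 4
workstations = 10                      # assumed count per ring
per_workstation_kbit = workstation_ring_mbit * 1000 / workstations
assert per_workstation_kbit > 7 * per_station_kbit  # 400 kbit/sec each
```

A card engineered around the first set of numbers doesn't need workstation-class per-card thruput, which is consistent with the comparison above.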
other posts mentioning 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier
misc. past posts discussing many of the issues and downside related
to the attempts to preserve the terminal communication paradigm.
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Tue, 11 Aug 2009 10:30:35 -0400
Chris Barts <chbarts+usenet@gmail.com> writes:
a corollary could be found in games on vm370-based online computers.
there were a number of games on various internal computers ... but it really came to a head when I introduced "adventure" and made it available on the internal network. there was an enormous uptick in game playing ... with some sites claiming all work came to a halt while everybody was playing games.
there was a response to scour all internal machines and eradicate all games. we were able to make the case that ... 1) it would be much better to create a corporate game playing policy (than attempting to eradicate all game playing) and 2) attempting to eradicate all game playing would just drive it underground ... (increasingly inventive ways to hide the games from the eradication software ... somewhat analogous to the ongoing race between software virus and anti-virus technologies).
a related game folklore tale from (vm370-based) tymshare ... was that when the ceo was told that tymshare was being increasingly used for playing games ... the ceo thought that game playing didn't project the appropriate commercial/business image ... and that all games should be eradicated from tymshare. besides the issue of inventive users coming up with increasingly complex cloaking technologies (making it impractical to actually exterminate all games) ... the issue apparently became moot when the ceo was told that game playing had increased to the point that it represented 1/3rd of tymshare revenue.
a few past posts mentioning adventure game:
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2002m.html#57 The next big things that weren't
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2007m.html#4 Zork and Adventure
https://www.garlic.com/~lynn/2007m.html#6 Zork and Adventure
https://www.garlic.com/~lynn/2007o.html#15 "Atuan" - Colossal Cave in APL?
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2008s.html#14 New machine code
https://www.garlic.com/~lynn/2009i.html#16 looking for IBM's infamous "Lab computer"
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: 11 Aug 2009 09:26:38 -0700re:
part of the terminal communication heritage was things like LU6.2 having 160,000 instruction pathlength and 15 buffer copies thru VTAM.
The execution of comparable tcp/ip function, on other platforms, had 5,000 instruction pathlength and 5 buffer copies. As networking technology advanced ... it was starting to assume more of the characteristics of file i/o; for large block network transfers ... the processor cycles for the buffer copies could start to exceed the processor cycles executing instructions.
for whatever reason, the early mainframe tcp/ip product had somewhat of a
"communication" design point ... it consumed a 3090 processor
sustaining 44kbytes/sec. However, I added rfc1044 support to the base
mainframe tcp/ip product ... and in some tuning tests at Cray Research
between a 4341 and a Cray ... was able to get 4341 channel speed
sustained thruput using only a modest amount of the 4341 (on the order
of 1000 times improvement in the bytes moved per instruction
executed). misc. past posts mentioning having done rfc 1044 support
for the mainframe tcp/ip product (with nearly thousand-fold
improvement in the implementation)
https://www.garlic.com/~lynn/subnetwork.html#1044
for XTP/HSP ... misc. past posts mentioning xtp &/or hsp
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
evolving advanced networking objective was to come even closer to the file i/o paradigm by not requiring any buffer copies at all ... directly getting data to/from the wire (using data-chaining, aka scatter/gather, I/O operations for header info) ... and trying to get pathlength under a thousand instructions.
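the no-copy idea can be sketched with the posix scatter/gather ("vectored") write ... a minimal illustration (python's os.writev standing in for channel-program data-chaining; the function name and the temp file playing "the wire" are purely illustrative, not the original implementation):

```python
import os
import tempfile

def send_with_gather(fd, header: bytes, payload: bytes) -> int:
    # scatter/gather ("data chaining"): hand the kernel a list of
    # separate buffers so header and payload go out in one operation
    # ... no staging buffer, no extra copy to concatenate them
    return os.writev(fd, [header, payload])

# demo against a temp file standing in for the wire
fd, path = tempfile.mkstemp()
try:
    n = send_with_gather(fd, b"HDR:", b"payload-bytes")
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 64)
finally:
    os.close(fd)
    os.unlink(path)
```

the point being that the header and payload never share a buffer in the application ... the gather list does the work that an extra buffer copy would otherwise do.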
various of the disk division advanced products (that weren't allowed to announce/ship) for the distributed environment ... attempted to have the processor handle the equivalent in aggregate network channel traffic ... as the processor was capable of handling aggregate disk channel traffic (large tens of mbytes/sec and later hundreds of mbytes/sec).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Wed, 12 Aug 2009 09:11:58 -0400jmfbahciv <jmfbahciv@aol> writes:
we had previously done ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
... with all kinds of availability considerations ... when we were out
marketing, I had coined the terms disaster survivability and
geographic survivability (to differentiate from disaster recovery).
https://www.garlic.com/~lynn/submain.html#available
Looked at being partly "co-located" in a central exchange (including 48v) ... but was somewhat apprehensive about it being very close to the fault line (they did explain about a lot of quake remediation efforts; it was a couple yrs after the quake).
we also had some big merchants getting into putting up webservers ... and effectively expecting the internet to be similar to the telephone network (if they thought about it at all). one was looking at having a huge amount of business during sunday football halftime ... but their webserver had a single connection into a single ISP ... an ISP that (at the time) regularly took down their routers on sunday for maintenance.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Disksize history question Newsgroups: alt.folklore.computers Date: Wed, 12 Aug 2009 09:35:03 -0400Freddy1X <freddy1X@indyX.netx> writes:
I've posted before that when SJR was originally deploying 6670s in departmental areas around the bldg ... things were setup to load different colored paper in the alternate paper drawer and the RSCS driver that printed out the separator page ... would also select a random entry from a collection of items (there were two files with items, one with stuff that had been collected ... and another that had a reformatted version of the IBMJARGON glossary file).
I took those two files and redid the signature processing to randomly select from the zippy, IBMJARGON, or the purely 6670s file ... and then select a random entry. I then had to modify the "random" routine that the zippy processing used. the zippy file was less than 32kbytes ... and it just used a random (signed 16bit) value to select a random byte location in the file and then back up to the start of an entry. The IBMJARGON and 6670s files were both well over 32kbytes ... so it would never select entries past 32kbytes.
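the 32kbyte limitation can be sketched ... a minimal illustration (the blank-line entry delimiter and the helper names are assumptions for illustration, not the original code):

```python
import random

def pick_entry(data: bytes, offset: int) -> bytes:
    # back up from a random byte offset to the start of an entry
    # (entries assumed separated by a blank line here)
    start = data.rfind(b"\n\n", 0, offset)
    start = 0 if start < 0 else start + 2
    end = data.find(b"\n\n", offset)
    end = len(data) if end < 0 else end
    return data[start:end]

def signed16_offset(rng, size):
    # the flaw: a (signed 16bit) random value tops out at 32767, so
    # in any file over 32kbytes, entries past byte 32767 are never hit
    return rng.randrange(0, min(size, 32768))

def full_offset(rng, size):
    # the fix: draw the byte offset over the whole file
    return rng.randrange(size)
```

with a 100kbyte file, the 16bit routine leaves roughly the top two-thirds of the entries unreachable ... which is why the routine had to be reworked for the larger files.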
I've mentioned getting into trouble with corporate auditors over them
finding the definition of auditors on separator pages found on various 6670s
around the bldg (they were doing after-hrs sweeps looking for classified
material left unsecured). we were already at odds with the corporate
auditors over a very public discussion about having games available
... recent reference
https://www.garlic.com/~lynn/2009l.html#16 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
past post mentioning auditor definition
https://www.garlic.com/~lynn/2002k.html#61 arrogance metrics (Benoits) was: general networking
https://www.garlic.com/~lynn/2004l.html#61 Shipwrecks
https://www.garlic.com/~lynn/2005f.html#51 1403 printers
https://www.garlic.com/~lynn/2007b.html#36 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2007b.html#37 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2008o.html#68 Blinkenlights
https://www.garlic.com/~lynn/2008o.html#69 Blinkenlights
https://www.garlic.com/~lynn/2008o.html#71 Why is sub-prime crisis of America called the sub-prime crisis?
https://www.garlic.com/~lynn/2008p.html#3 Blinkenlights
https://www.garlic.com/~lynn/2008p.html#8 Global Melt Down
https://www.garlic.com/~lynn/2008p.html#71 Password Rules
https://www.garlic.com/~lynn/2009e.html#73 Most 'leaders' do not 'lead' and the majority of 'managers' do not 'manage'. Why is this?
a few past posts mentioning zippy/yow
https://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/2001b.html#77 Inserting autom. random signature
https://www.garlic.com/~lynn/2001b.html#78 Inserting autom. random signature
https://www.garlic.com/~lynn/2002h.html#7 disk write caching (was: ibm icecube -- return of
https://www.garlic.com/~lynn/2004f.html#48 Random signatures
https://www.garlic.com/~lynn/2004k.html#48 Xah Lee's Unixism
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Cyber attackers empty business accounts in minutes Date: 12 Aug, 2009 Blog: Financial Crime Risk, Fraud and Securityre:
Jim had conned me into interviewing for Chief Security Architect in redmond ... the interview went on over a period of weeks ... but we couldn't come to agreement. Part of the issue was we had done a temporary stint in the redmond area ... and my wife had severe SAD ... and there was no way to fill the position remotely.
Past post mentioning celebrating Jim last year:
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing
Over the years, I've worked with numerous of their people in financial standards groups and they are really competent people. They have a tough task ... walking the line between mass market usability and security (as well as dealing with financial industry legacy infrastructures).
for the fun of it ... other recent (archived) threads/posts delving
into archeology of financial (transaction) dataprocessing
https://www.garlic.com/~lynn/2009.html#25 Wrong Instrument for Recurring Payments
https://www.garlic.com/~lynn/2009.html#39 repeat after me: RAID != backup
https://www.garlic.com/~lynn/2009.html#60 The 25 Most Dangerous Programming Errors
https://www.garlic.com/~lynn/2009.html#65 The 25 Most Dangerous Programming Errors
https://www.garlic.com/~lynn/2009.html#87 Cleaning Up Spaghetti Code vs. Getting Rid of It
https://www.garlic.com/~lynn/2009c.html#33 H5: Security Begins at the Application and Ends at the Mind
https://www.garlic.com/~lynn/2009c.html#34 Is the Relational Database Doomed?
https://www.garlic.com/~lynn/2009d.html#4 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009e.html#6 ATMs At Risk
https://www.garlic.com/~lynn/2009f.html#39 PIN Crackers Nab Holy Grail of Bank Card Security
https://www.garlic.com/~lynn/2009g.html#18 Top 10 Cybersecurity Threats for 2009, will they cause creation of highly-secure Corporate-wide Intranets?
https://www.garlic.com/~lynn/2009g.html#25 New standard for encrypting card data in the works; backers include Heartland
https://www.garlic.com/~lynn/2009g.html#63 New standard for encrypting card data in the works; backers include Heartland
https://www.garlic.com/~lynn/2009h.html#28 Computer virus strikes US Marshals, FBI affected
https://www.garlic.com/~lynn/2009i.html#22 My Vintage Dream PC
https://www.garlic.com/~lynn/2009i.html#71 Barclays ATMs hit by computer fault
https://www.garlic.com/~lynn/2009j.html#1 Is it possible to have an alternative payment system without riding on the Card Network platforms?
https://www.garlic.com/~lynn/2009j.html#48 Replace the current antiquated credit card system
https://www.garlic.com/~lynn/2009k.html#33 Trouble in PKI land
https://www.garlic.com/~lynn/2009k.html#35 Microsoft Is Among the First to Try out PayPal's New Payments API
https://www.garlic.com/~lynn/2009k.html#63 The satate of software
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Disksize history question Newsgroups: alt.folklore.computers Date: Thu, 13 Aug 2009 09:06:02 -0400Stan Barr <plan.b@dsl.pipex.com> writes:
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: another item related to ASCII vs. EBCDIC Newsgroups: alt.folklore.computers Date: Thu, 13 Aug 2009 09:43:06 -0400jeffj@panix.com (Jeff Jonas) writes:
clone controllers ... misc. posts
https://www.garlic.com/~lynn/submain.html#360pcm
begat future system project ... misc. posts
https://www.garlic.com/~lynn/submain.html#futuresys
distraction of future system (to completely replace 360/370 with something as different from 360/370 as 360 had been different from what had gone before) ... allowed the 370 product pipelines to start to dry up. It also contributed to allowing clone processors to gain a foothold in the market.
I had made some number of disparaging remarks about the effort and continued to work on 370 stuff all during the period. when future system was killed there was a mad rush to get stuff back into the 370 product pipeline ... which contributed to the decision to pick up & release a lot of stuff I had been doing.
One of the things was the dynamic adaptive resource manager ... which it was
decided to release as a separate kernel component. the rise of the clone
processors appeared to motivate the decision to start charging for
kernel software. my dynamic adaptive resource manager was selected as
the guinea pig ... and I had to spend a lot of time with business
planning people about policies for kernel software charging. Charging
for kernel software components increased until eventually all kernel
software was being charged for. in the interim period there was some
anarchy having to ship (uncharged for) kernels that ran with and w/o
various separately charged for kernel components. things settled down
after everything was charged for. misc. past posts mentioning dynamic
adaptive resource manager (frequently referred to as "fair share
scheduler" since the default policy was "fair share" resource
consumption)
https://www.garlic.com/~lynn/subtopic.html#fairshare
one of the consequences of the original unbundling was what to do about SE training. previously SEs would go through something of an apprentice period as part of a large SE team at the customer site ... getting lots of "hands-on" experience. After unbundling & charging for SE "services" ... couldn't justify having all these young SEs at the customer.
This begat HONE ... some number of CP67 datacenters around the country
to provide "hands-on" experience ... young SEs in the branch office
being able to get online access to running operating system in HONE
(CP67) virtual machine. After 370 was announced ... but before 370
virtual memory announcement ... there were operating systems that used
the few new instructions announced as part of original 370. The HONE
CP67 systems got special package of updates that would simulate the new
370 instructions ... allowing "370" operating systems to be run in CP67
virtual machine (on real 360/67). Misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone
some recent posts commenting about unbundling:
https://www.garlic.com/~lynn/2009h.html#68 My Vintage Dream PC
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#37 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#45 dynamic allocation
https://www.garlic.com/~lynn/2009j.html#18 Another one bites the dust
https://www.garlic.com/~lynn/2009j.html#67 DCSS
https://www.garlic.com/~lynn/2009j.html#68 DCSS addenda
https://www.garlic.com/~lynn/2009k.html#71 Hercules Question
https://www.garlic.com/~lynn/2009k.html#73 And, 40 years of IBM midrange
https://www.garlic.com/~lynn/2009k.html#76 And, 40 years of IBM midrange
https://www.garlic.com/~lynn/2009l.html#1 Poll results: your favorite IBM tool was IBM-issued laptops
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: another item related to ASCII vs. EBCDIC Newsgroups: alt.folklore.computers Date: Thu, 13 Aug 2009 10:11:04 -0400Larry__Weiss <lfw@airmail.net> writes:
When TWA went bankrupt ... I lost all my miles ... and switched to Pan Am. Pan Am then decided to get out of the pacific route (kennedy, san fran, over the pacific & back) and concentrate on the atlantic/europe business (sold off pacific routes and several 747s to united). I then switched to mostly American (sometimes United).
Over the years, I've periodically railed at "change of equipment" flts. These are flts that are listed as "direct" flt w/o "connection" ... but there is "change of equipment" ... you have to get off the plane and take another plane (it looks like a duck to me).
The explanation was that in the OAG and on the screens, "direct" flts are listed before "connecting" flts ... when looking to get between two locations. "change of equipment" might have a plane flying with multiple different flt numbers. people on the plane have to get off at some intermediate airport and change planes (since the flt no. would appear like a direct flt and appear at the top of the listing; flts at the top of the list were more frequently selected than flts lower in the list).
The 1st case I found of this was a TWA plane that parked in San Jose overnight ... and was listed as both an early morning direct flt to seattle and a direct flt to kennedy. The plane did a short hop from san jose to san fran ... and passengers going to kennedy would deplane in san fran and get on a different plane ("change of equipment" ... not a "connecting flight").
Later when I looked at redoing "routes" for one of the large reservation
systems ("routes" is the application that when somebody asks for getting
from origin airport to destination airport, displays the possible flts
and/or connecting flts) ... I was provided a copy of the full OAG
database (all scheduled commercial flts in the world). I found what
appeared to be worse case; some equipment with half dozen flt numbers (a
particular carrier with different flt numbers all departing Honolulu at
exact same time and all arriving LAX the same time). misc. past posts
mentioning redoing "routes":
https://www.garlic.com/~lynn/2000.html#61 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
https://www.garlic.com/~lynn/2002i.html#40 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2003o.html#17 Rationale for Supercomputers
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2005k.html#37 The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)
https://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
https://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins
https://www.garlic.com/~lynn/2007p.html#34 what does xp do when system is copying
https://www.garlic.com/~lynn/2007p.html#45 64 gig memory
https://www.garlic.com/~lynn/2007p.html#67 what does xp do when system is copying
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?
https://www.garlic.com/~lynn/2008p.html#39 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Thu, 13 Aug 2009 16:38:21 -0400Michael Wojcik <mwojcik@newsguy.com> writes:
also pushing that the NSFNET backbone to have T1 links
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
and some old email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
but because of some internal politics ... we weren't allowed to bid on the NSFNET backbone (even with the director of NSF writing a letter to the corporation asking for our participation ... even making statements that what we already had running was at least five yrs ahead of all bid submissions to build something new for the NSFNET backbone). The winning bid ... didn't even do T1 links ... they just had 440kbit links ... but possibly to meet the letter of the bid ... they put in T1 trunks and used telco multiplexors to run multiple 440kbit links over the T1 trunks (we made snide comments that the T1 trunks going into the telco infrastructure were likely put thru other telco multiplexors ... possibly at some point even running over some T5 trunk ... which could allow them to claim a T5 network).
in the mid-80s, we also got into trouble with the corporate communication product group over the T1 issue (likely because their standard product only supported up to 56kbit links). Eventually they produced a customer survey that purported to show customers wouldn't be ready for T1 links until the mid-90s.
Their standard product supported something called "fat pipes" which allowed multiple, parallel (56kbit) links to be logically treated as a single link. Their methodology was to survey customer use of "fat pipes" and the number of customers with 2-link fat pipes, 3-link fat pipes, etc. They found that the number dropped to zero at 6-link fat pipes ... and used that to justify that customers weren't really ready for higher speed links (and therefore didn't need a T1 product for another decade).
What their methodology failed to recognize was that the telco tariff for a T1 link was typically about the same as five or six 56kbit links ... and so customers whose requirements got to 300kbit or so, between two locations ... installed a full T1 and operated it with some other vendor's equipment. A trivial survey of mainframe customers turned up 200 with T1 links (driven by some other vendor's equipment).
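the tariff arithmetic can be sketched (the 5.5-link price ratio is an assumed illustrative stand-in for "about the same as five or six 56kbit links"):

```python
import math

KBIT_56 = 56          # single "fat pipe" link speed
T1_PRICE_IN_56K_LINKS = 5.5  # assumed: T1 tariff ~ five or six 56kbit links

def cheaper_as_t1(required_kbit: float) -> bool:
    # number of parallel 56kbit links needed to meet the requirement
    links_needed = math.ceil(required_kbit / KBIT_56)
    # once that count passes the T1's price in 56kbit-link units,
    # a full T1 is the cheaper way to buy the bandwidth
    return links_needed > T1_PRICE_IN_56K_LINKS
```

which shows why the fat-pipe counts dropped to zero around 6 links ... above roughly 300kbit, nobody rational kept adding 56kbit links, they bought a T1 (and, since the surveyed product couldn't drive it, ran it on some other vendor's equipment, invisible to the survey).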
In that time-frame, the hsdt project was getting some equipment built to spec ... one friday before I was to leave for a trip to the other side of the pacific ... somebody in the communication group distributed an announcement for a new online computer discussion group on the subject of "high-speed" communication ... that included the following definition:
low-speed: <9.6kbits
medium-speed: 19.2kbits
high-speed: 56kbits
very high-speed: 1.5mbits
the following Monday morning, on the wall of a conference room on the other side of the pacific was:
low-speed: <20mbits
medium-speed: 100mbits
high-speed: 200-300mbits
very high-speed: >600mbits
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: another item related to ASCII vs. EBCDIC Newsgroups: alt.folklore.computers Date: Thu, 13 Aug 2009 18:14:27 -0400Walter Bushell <proto@panix.com> writes:
the flt you come in on has multiple flt numbers ... you get off the plane at an airport and get on another plane ... which has acquired one of the flt numbers from the plane you had just left. the plane you got on ... may have arrived from some other airport, with multiple flt numbers. It may retain some of the flt numbers (that it arrived with) in addition to acquiring one or more new flt numbers when it departs.
the flt numbers no longer have a one-to-one relationship with a particular piece of equipment flying from one location to the next ... becoming something of a virtual abstraction overlayed on top of the real equipment moving between airports.
it was explicitly stated that the major reason for the whole thing was to get what would nominally be a connecting flight ... listed in the 2nd section of the screen/OAG ... into the 1st section that listed the "direct" flts.
I was asked to come in to look at redoing "routes" ... and some other sections of a major res. system. I was given a rundown on the existing implementation ... including ten things that they would like to be able to do ... but weren't possible in the existing implementation. One of the existing "routes" limitations was finding routes with more than three connects (which required a human agent to stitch together more complex trips).
some of the choice of these "virtual flt numbers" ... was a work-around to the 3-connect limitation (i.e. routes could stitch a complex trip together with multiple flts with the aid of the virtual flt numbers and change of equipment).
As mentioned, I was provided a full OAG ... it had something over 4000 airports world-wide with commercially scheduled flts, and 480+k flt segments (although possibly fewer physical takeoffs/landings ... since some equipment was taking off & landing with multiple flt numbers).
I came back after two months with a complete rewrite of "routes" (that implemented all ten impossible things) ... one of the paces they put it thru was specifying really obscure origin and destination airports ... requiring many more than three connects. One origin was some obscure airport in kansas and the obscure destination airport was in georgia. It was five connects and more than 24hrs elapsed time (no, it isn't likely the georgia you are thinking of).
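the connection search can be sketched as a breadth-first search over flt segments with a connection limit (the toy segment list and function names here are made up for illustration ... real OAG-style data carries times, carriers, flt numbers, etc., and the real implementation was nothing this naive):

```python
from collections import deque

# toy schedule: (origin, destination) flight segments
SEGMENTS = [
    ("SJC", "SFO"), ("SFO", "ORD"), ("ORD", "ATL"),
    ("ATL", "AGS"), ("SJC", "JFK"), ("JFK", "AGS"),
]

def find_route(origin, dest, max_connects=3):
    # breadth-first search, so the first route found uses the fewest
    # segments; max_connects bounds the search the way the old
    # 3-connect limit did
    adj = {}
    for frm, to in SEGMENTS:
        adj.setdefault(frm, []).append(to)
    queue = deque([[origin]])
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path
        # a path with k segments has k-1 connects; stop extending
        # paths already at the connection limit
        if len(path) - 1 > max_connects:
            continue
        for nxt in adj.get(path[-1], []):
            if nxt not in path:  # never revisit an airport
                queue.append(path + [nxt])
    return None
```

raising max_connects is what lets the search stitch together the five-connect kansas-to-georgia sort of trip that the 3-connect implementation had to hand off to a human agent.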
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Old-school programming techniques you probably don't miss Newsgroups: alt.folklore.computers Date: Fri, 14 Aug 2009 08:03:07 -0400Old-school programming techniques you probably don't miss
from above:
Sorting algorithms and other hand-coded fiddly stuff
Creating your own graphical user interfaces
Go To and spaghetti code ...
Manual multithreading and multitasking
Self-modifying code
Memory management
Punch cards and other early development environments
Pointer math and date conversions
Hungarian notation and other language limitations
Doing strange things to make code run faster
Being patient
... snip ...
in 80s, i got email from one of my kids at school ... mentioning that interactive response slow down would be measured in minutes ... during prime-time use.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Fri, 14 Aug 2009 11:13:25 -0400hancock4 writes:
and old posts with picture of (3101) glass teletype (that replaced
cdi miniterm at home):
https://www.garlic.com/~lynn/2008m.html#51 Baudot code direct to computers?
I've used similar argument regarding the uptake of relational DBMS in
the 80s as skills to support the "60s DBMS" became more expensive and
scarcer (although there are still some really high-end DBMS applications
still going strong on 60s DBMS mainframe technology ... the cost/benefit
trade-off can still support the scarce/expensive care&feeding by
humans). misc. past posts mentioning original relational/SQL
implementation
https://www.garlic.com/~lynn/submain.html#systemr
The 3101 ... while 1200 baud dialup ... had something of a split personality ... sometimes being used as scrolling paper teletype emulation ... and sometimes trying to emulate a 3270 terminal for (emerging) "fullscreen" applications.
It is also the argument I've used for a lot of the recent uptake in virtualization. During the 90s ... it became popular to dedicate a single box per application ... because the care&feeding of multiple applications on the same box was more expensive (& scarce) than the cost of the additional hardware. the current paradigm is that a lot of these boxes are running at ten percent or less utilization. Virtualization is a paradigm to merge lots of these different applications onto a single box (frequently getting a 10:1 reduction in actual boxes) ... w/o needing as much human (expensive/scarce) care&feeding (as the traditional operating system paradigm required to support multiple concurrent applications).
It is also the argument I've used for a lot of the early uptake of
IBM/PCs in the 80s ... for about the same price of 3270 terminal
& in the same desktop footprint, businesses could get a IBM/PC.
Businesses that had justification for tens of thousands of 3270 terminals
found it a relatively brain-dead task to change the business justification
from 3270 terminal to IBM/PC (getting some amount of local computing
capability as essentially a freebie bonus). misc. past posts mentioning
the whole terminal emulation era/paradigm
https://www.garlic.com/~lynn/subnetwork.html#emulation
when I was an undergraduate ... there was a (university administration business) 360 cobol program that had been converted from 709 cobol ... which had apparently gone thru some iterations back to 407 plug-board simulation ... to a real 407. I wasn't aware of it, until one day it ended with a print-out of simulated 407 sense-switch settings (running on a 360/67 in non-virtual os/360 360/65 mode) ... which the person baby-sitting the application had never seen before.
All processing in the datacenter came to a halt while they tried to find somebody that understood what it might mean. After a couple hrs, they finally decided to run it again and see if the results were the same. The application was eventually rerun ... and then the datacenter normal batch processing resumed.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: another item related to ASCII vs. EBCDIC Newsgroups: alt.folklore.computers Date: Fri, 14 Aug 2009 11:23:49 -0400Charles Richmond <frizzle@tx.rr.com> writes:
in the past decade or two ... there have been some periodic comments about the sorry state of "western" mathematics ... getting sidetracked into computer use ... and straying from fundamental theory.
some number of industries, looking for fundamental theory skills, were recruiting mathematicians from cultures that had much lower computer exposure. one such was the chip industry ... which had gone thru a period of (computerized) brute-force chip design validation ... but as chip complexity & size grew ... some new theoretical approach had to be invented to try and address chip design validation.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: IBM launches integrated mainframe packages for payments, data warehousing and SOA Date: 14 Aug, 2009 Blog: Mainframe Experts NetworkIBM launches integrated mainframe packages for payments, data warehousing and SOA
from above:
IBM on Friday announced a lineup of integrated mainframe, software and
services packages designed for specific tasks like data warehousing,
service oriented architecture and transactions.
... snip ...
We had been called in to consult with small client/server startup that
wanted to do payment transactions on their server ... they had also
invented this technology called SSL they wanted to use; the result is
now sometimes referred to as "electronic commerce". Part of the
deployment was something called a "payment gateway" ... which we
periodically refer to as the original SOA. misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
For older archeology reference to payments and financial (transaction)
dataprocessing:
https://www.garlic.com/~lynn/2008p.html#27 Father of Financial Dataprocessing
... also referenced in this recent thread in
(linkedin) financial security group discussion
https://www.garlic.com/~lynn/2009l.html#20
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: nostalgia for download limits Newsgroups: alt.folklore.computers Date: Fri, 14 Aug 2009 14:31:12 -0400Michael Wojcik <mwojcik@newsguy.com> writes:
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Fri, 14 Aug 2009 17:50:52 -0400scott@slp53.sl.home (Scott Lurndal) writes:
and that MAC/LAN violates the OSI model since the interface sits
somewhere in the middle of the networking layer (below the
network/transport interface ... but above the network/datalink
interface ... i.e. LAN provides some, but not all of the network layer
function).
https://en.wikipedia.org/wiki/OSI_model
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old-school programming techniques you probably don't miss Newsgroups: alt.folklore.computers Date: Sat, 15 Aug 2009 08:49:47 -0400Peter Flass <Peter_Flass@Yahoo.com> writes:
one of the things i found was that there were cases of highly optimized, complex, well written assembler with conditional tests & branches ... that could get much less clear when translated into some of these other conditional execution infrastructures. part of it was code that actually used the 4-state branching of the 360 condition code .... i.e. condition setting in 360 is 2-bits, four possible states ... not simple 1-bit/binary. There were cases of highly optimized kernel code that took advantage of the four-state logic (with branches) ... which got a lot less clear when trying to translate into 2-state/binary if/then/else structures.
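the four-state pattern can be sketched (in python rather than 360 assembler, purely to illustrate the control structure; the overflow bound and names are illustrative):

```python
# 360-style condition code: one operation yields one of four states
# (for arithmetic: 0 = zero, 1 = negative, 2 = positive, 3 = overflow)
CC_ZERO, CC_NEG, CC_POS, CC_OVERFLOW = 0, 1, 2, 3

LIMIT = 2**31  # illustrative 32-bit signed overflow bound

def add_cc(a: int, b: int):
    # return (result, condition code) the way a 360 add sets CC
    r = a + b
    if r >= LIMIT or r < -LIMIT:
        return r, CC_OVERFLOW
    if r == 0:
        return r, CC_ZERO
    return (r, CC_NEG) if r < 0 else (r, CC_POS)

def classify(a: int, b: int) -> str:
    # four-way branch straight off the condition code ... the pattern
    # that turns awkward when forced into nested binary if/then/else
    _, cc = add_cc(a, b)
    return {CC_ZERO: "zero", CC_NEG: "negative",
            CC_POS: "positive", CC_OVERFLOW: "overflow"}[cc]
```

in assembler this is one operation followed by up to three conditional branches off the single CC setting; a binary if/then/else translation has to re-derive the distinctions with extra tests, which is where the clarity got lost.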
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Disksize history question Newsgroups: alt.folklore.computers Date: Sat, 15 Aug 2009 09:34:02 -0400Walter Bushell <proto@panix.com> writes:
HP refunds $520 for unused software
http://ernstfamily.ch/jonathan/2009/03/hp-refunds-520-of-software/
older article:
Group of PC users wants Windows refund
http://news.cnet.com/Group-of-PC-users-wants-Windows-refund/2100-1040_3-220452.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Sun, 16 Aug 2009 09:10:54 -0400jmfbahciv <jmfbahciv@aol> writes:
each line then came into a port on the telecommunication controller. when I added ascii/tty terminal support to cp67 ... I extended cp67's dynamic terminal-type recognition to include tty/ascii ... dynamically re-associating the type of line-scanner with a specific port on the 2702 (using the 2702 SAD command). the 2702 hardware implementation took a short-cut ... hard-wiring line-speed/oscillator to each port ... the dynamic stuff would work with leased/directly-wired lines ... but wouldn't actually work when the pool of lines/ports included different types of terminals operating at different speeds (i.e. it required a different "dial-in" number in the telco box for each line-speed ... each configured with a different pool of numbers/lines/ports ... instead of a single number & single pool of numbers/lines/ports).
this was part of the motivation for the univ. to start its own clone
controller project ... to implement dynamic line-speed for each port
... as well as (2702 emulated) dynamic line-scanner type for each port.
recent post/thread:
https://www.garlic.com/~lynn/2009j.html#60 A Complete History Of Mainframe Computing
past posts ... also mentions that the clone controller project with
interdata/3 morphed into clone controller product sold by interdata
(later perkin/elmer when PE bought interdata)
https://www.garlic.com/~lynn/submain.html#360pcm
related post mentioning that UofMich did something similar
for their 360/67 online MTS system (but with PDP8):
https://www.garlic.com/~lynn/2009k.html#70 An inComplete History Of Mainframe Computing
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old-school programming techniques you probably don't miss Newsgroups: alt.folklore.computers Date: Sun, 16 Aug 2009 09:22:05 -0400Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
I've had lots of (security-related) skirmishes over code that was "designed(?)" to purposefully be convoluted and complex ... possibly under the assumption that more complex is more secure & valuable and therefore can carry a much larger price.
however, vulnerabilities tend to be proportional to complexity ... which frequently leaves complex implementations much less secure (their counter-argument is that the customer's humans involved just aren't adequately educated & trained ... an opportunity to pile on large training and consulting fees).
I've frequently used the line that KISS is usually much harder to do
than "complex" (since it requires some actual, in-depth understanding)
and significantly more secure. random past posts mentioning KISS:
https://www.garlic.com/~lynn/aadsm2.htm#mcomfort Human Nature
https://www.garlic.com/~lynn/aadsm3.htm#kiss1 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss2 Common misconceptions, was Re: KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp-00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss3 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss4 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss5 Common misconceptions, was Re: KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss6 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss7 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss8 KISS for PKIX
https://www.garlic.com/~lynn/aadsm3.htm#kiss9 KISS for PKIX .... password/digital signature
https://www.garlic.com/~lynn/aadsm3.htm#kiss10 KISS for PKIX. (authentication/authorization seperation)
https://www.garlic.com/~lynn/aadsm5.htm#liex509 Lie in X.BlaBla...
https://www.garlic.com/~lynn/aadsm7.htm#3dsecure 3D Secure Vulnerabilities?
https://www.garlic.com/~lynn/aadsm8.htm#softpki10 Software for PKI
https://www.garlic.com/~lynn/aadsm10.htm#boyd AN AGILITY-BASED OODA MODEL FOR THE e-COMMERCE/e-BUSINESS ENTERPRISE
https://www.garlic.com/~lynn/aadsm11.htm#10 Federated Identity Management: Sorting out the possibilities
https://www.garlic.com/~lynn/aadsm11.htm#30 Proposal: A replacement for 3D Secure
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#54 TTPs & AADS Was: First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm13.htm#16 A challenge
https://www.garlic.com/~lynn/aadsm13.htm#20 surrogate/agent addenda (long)
https://www.garlic.com/~lynn/aadsm15.htm#19 Simple SSL/TLS - Some Questions
https://www.garlic.com/~lynn/aadsm15.htm#20 Simple SSL/TLS - Some Questions
https://www.garlic.com/~lynn/aadsm15.htm#21 Simple SSL/TLS - Some Questions
https://www.garlic.com/~lynn/aadsm15.htm#39 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm15.htm#40 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm16.htm#1 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm16.htm#10 Difference between TCPA-Hardware and a smart card (was: example:secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm16.htm#12 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#0 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#41 Yahoo releases internet standard draft for using DNS as public key server
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm19.htm#27 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm21.htm#1 Is there any future for smartcards?
https://www.garlic.com/~lynn/aadsm21.htm#11 Payment Tokens
https://www.garlic.com/~lynn/aadsm21.htm#26 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm24.htm#15 Apple to help Microsoft with "security neutrality"?
https://www.garlic.com/~lynn/aadsm24.htm#52 Crypto to defend chip IP: snake oil or good idea?
https://www.garlic.com/~lynn/aadsm27.htm#23 Identity resurges as a debate topic
https://www.garlic.com/~lynn/aadsm27.htm#54 Security can only be message-based?
https://www.garlic.com/~lynn/aadsm27.htm#64 How to crack RSA
https://www.garlic.com/~lynn/aadsm28.htm#0 2007: year in review
https://www.garlic.com/~lynn/aadsm28.htm#11 Death of antivirus software imminent
https://www.garlic.com/~lynn/99.html#228 Attacks on a PKI
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001l.html#1 Why is UNIX semi-immune to viral infection?
https://www.garlic.com/~lynn/2001l.html#3 SUNW at $8 good buy?
https://www.garlic.com/~lynn/2002b.html#22 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
https://www.garlic.com/~lynn/2002c.html#15 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002d.html#0 VAX, M68K complex instructions (was Re: Did Intel Bite Off MoreThan It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#1 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002e.html#26 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#29 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002i.html#62 subjective Q. - what's the most secure OS?
https://www.garlic.com/~lynn/2002k.html#11 Serious vulnerablity in several common SSL implementations?
https://www.garlic.com/~lynn/2002k.html#43 how to build tamper-proof unix server?
https://www.garlic.com/~lynn/2002k.html#44 how to build tamper-proof unix server?
https://www.garlic.com/~lynn/2002m.html#20 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#27 Root certificate definition
https://www.garlic.com/~lynn/2002p.html#23 Cost of computing in 1958?
https://www.garlic.com/~lynn/2003.html#60 MIDAS
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003c.html#66 FBA suggestion was Re: "average" DASD Blocksize
https://www.garlic.com/~lynn/2003d.html#14 OT: Attaining Perfection
https://www.garlic.com/~lynn/2003h.html#42 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs
https://www.garlic.com/~lynn/2003m.html#33 MAD Programming Language
https://www.garlic.com/~lynn/2003n.html#37 Cray to commercialize Red Storm
https://www.garlic.com/~lynn/2004c.html#26 Moribund TSO/E
https://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#30 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#60 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
https://www.garlic.com/~lynn/2004h.html#51 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004q.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#10 The Soul of Barb's New Machine
https://www.garlic.com/~lynn/2005.html#12 The Soul of Barb's New Machine
https://www.garlic.com/~lynn/2005c.html#22 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005i.html#19 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005l.html#18 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005p.html#43 Security of Secret Algorithm encruption
https://www.garlic.com/~lynn/2005q.html#24 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#34 How To Abandon Microsoft
https://www.garlic.com/~lynn/2005q.html#40 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006k.html#38 PDP-1
https://www.garlic.com/~lynn/2006m.html#46 Musings on a holiday weekend
https://www.garlic.com/~lynn/2006n.html#22 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2007b.html#10 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2007d.html#70 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007h.html#30 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007i.html#5 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#7 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#25 Latest Principles of Operation
https://www.garlic.com/~lynn/2007i.html#26 Latest Principles of Operation
https://www.garlic.com/~lynn/2007l.html#12 My Dream PC -- Chip-Based
https://www.garlic.com/~lynn/2007l.html#13 My Dream PC -- Chip-Based
https://www.garlic.com/~lynn/2008c.html#61 more on (the new 40+ yr old) virtualization
https://www.garlic.com/~lynn/2008e.html#52 Any benefit to programming a RISC processor by hand?
https://www.garlic.com/~lynn/2008h.html#47 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#97 Is virtualization diminishing the importance of OS?
https://www.garlic.com/~lynn/2008i.html#18 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#75 Outsourcing dilemma or debacle, you decide
https://www.garlic.com/~lynn/2008j.html#64 lack of information accuracy
https://www.garlic.com/~lynn/2008k.html#55 recent mentions of 40+ yr old technology
https://www.garlic.com/~lynn/2008k.html#74 Top 10 vulnerabilities for service orientated architecture?
https://www.garlic.com/~lynn/2008l.html#21 recent mentions of 40+ yr old technology
https://www.garlic.com/~lynn/2008p.html#14 Can Smart Cards Reduce Payments Fraud and Identity Theft?
https://www.garlic.com/~lynn/2008p.html#19 Can Smart Cards Reduce Payments Fraud and Identity Theft?
https://www.garlic.com/~lynn/2008p.html#55 Can Smart Cards Reduce Payments Fraud and Identity Theft?
https://www.garlic.com/~lynn/2008p.html#65 Barbless
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old-school programming techniques you probably don't miss Newsgroups: alt.folklore.computers Date: Sun, 16 Aug 2009 09:28:04 -0400Peter Flass <Peter_Flass@Yahoo.com> writes:
sorry ... i didn't mean to imply it was pascal ... just that it was somewhat similar to pascal.
I didn't actually run into pascal until almost a decade later. I was
doing some stuff with the tools group in the los gatos vlsi lab ... and they
were using the metaware/deremer TWS stuff to develop a mainframe pascal for
implementing vlsi tools ... old reference
https://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?
it eventually evolved into a pascal product released to customers ... and was also migrated to other platforms (risc).
i've mentioned that it was also used to implement original mainframe
tcp/ip product ... recent reference:
https://www.garlic.com/~lynn/2009l.html#17 SNA: conflicting opinions
which strays into implementing rfc 1044 for mainframe tcp/ip product
https://www.garlic.com/~lynn/subnetwork.html#1044
I also used it for implementing some number of other (mainframe)
applications ... old thread:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
the above discusses re-implementing some vm370 kernel stuff in pascal
... as part of reworking the infrastructure to enable RSCS to run 100
times faster ... the objective was to enable RSCS to drive multiple HSDT
links (full-duplex T1 and higher speed) at sustained media speeds (each
full-duplex T1 about 150kbytes/sec in each direction, HSDT node
requirement well over several mbytes/sec sustained). other posts
mentioning hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt
when palo alto originally started the project to do a BSD-to-mainframe
port, I talked them into using Metaware's C compiler (one of the two
people responsible for the original LSG vlsi lab pascal was then at
Metaware, the other was vp of software at MIPS). when that BSD was
retargeted to PC/RT (risc) ... they continued with Metaware's C
compiler. a couple past references:
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2005s.html#33 Power5 and Cell, new issue of IBM Journal of R&D
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Disksize history question Newsgroups: alt.folklore.computers Date: Sun, 16 Aug 2009 09:55:31 -0400Walter Bushell <proto@panix.com> writes:
there have been lots of articles about how the finance lobby has spent more than $5B on congress over the past decade ... significantly facilitating the current economic crisis ... but there have also been a few footnotes that the healthcare lobby, this past session, has actually outspent the finance lobby.
there have been past references that the body with the highest percentage of convicted felons is congress.
note that in the late 90s with the flurry of outsourcing activity for y2k remediation (of computer applications) ... especially in the finance industry ... there were cases of organized crime setting up front organizations that low-bid y2k remediation contracts (since their objective was to install backdoors in the financial software).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM to slash MF Prices? Newsgroups: bit.listserv.ibm-main Date: Mon, 17 Aug 2009 10:10:03 -0400couple more ...
IBM :: IBM Announces System z Mainframe Offerings to Help Customers
Deploy New Workloads
http://sev.prnewswire.com/computer-electronics/20090814/NY6153514082009-1.html
IBM halves mainframe Linux engine prices
http://www.theregister.co.uk/2009/08/17/ibm_mainframe_linux_cuts/
from above ...
The IBM mainframe may not have a lot of direct competition when it comes
to z/OS-based batch and transactional work, but the story is different
when it comes to Linux. There's plenty of competition among Linux
platforms, and Big Blue can't ignore the pressure that Moore's Law
brings to bear.
... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Network Rivalry Sparks 10-Year Quadrupling of PIN-Debit Pricing Date: 17 Aug, 2009 Blog: Payment Systems NetworkNetwork Rivalry Sparks 10-Year Quadrupling of PIN-Debit Pricing
from above:
Now, data compiled by the Federal Reserve Bank of Kansas City show
why: the cost of accepting PIN-debit cards rose 305% between 1996 and
2007.
... snip ...
Merchants have been somewhat conditioned that interchange fees are proportional to risk/fraud. I've seen studies that signature-debit fraud is 15 times pin-debit fraud ... which can be used to justify a significantly higher interchange fee for signature-debit (fraud).
If PIN-debit fraud has increased by 300% over the past ten yrs ... there could be a case for a 300% increase in PIN-debit interchange fees ... as opposed to the possible scenario that the distribution of total transactions at an institution has shifted towards lower-fraud PIN-debit ... which would require an increase in PIN-debit fees simply to preserve total institution revenue (i.e. past studies claiming that nearly 40% of US financial institutions' avg bottom line comes from payment-related fees and charges).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old-school programming techniques you probably don't miss Newsgroups: alt.folklore.computers Date: Mon, 17 Aug 2009 12:50:16 -0400Michael Wojcik <mwojcik@newsguy.com> writes:
this was somewhat in reaction to the leakage of a hardcopy 370 virtual memory document (before virtual memory had been announced). it was a "copy" ... in the wake of that ... all internal copy machines got a little number glued to the underside of the glass ... so the number shows up on all copies (making it possible to trace back to the machine used).
in any case, i had weekend time in the machine room with one such system ... and some people made the off-hand comment that even if Lynn was left alone in the machine room all weekend, he still wouldn't be able to access the information. As I've mentioned before, it was one of the few times I rose to the bait. I first had to disable all outside access to the machine ... and then, from the front panel, flipped a bit in a kernel branch instruction ... so whatever was typed ... was taken as a valid password (cutting the machine off from outside access took longer than the actual break-in).
recent reference also mentioning above:
https://www.garlic.com/~lynn/2009k.html#5 Moving to the Net: Encrypted Execution for User Code on a Hosting Site
for a long time, "self-modifying" code capability was blamed for (at least) a 30% slow-down in 370 instruction execution (until speculative execution and the ability to undo executed instructions) ... the issue was that the pipeline couldn't decode and execute instructions ahead if it had to constantly check whether an instruction might modify the immediately following instruction. as machine implementations got more powerful and included things like speculative execution ... self-modifying instructions could be treated similarly to branch taken or not-taken.
i've periodically mentioned that a lot of original 801/risc was as much about eliminating hardware features that resulted in execution slow-down as about a simplified architecture (that could be implemented in a single chip) ... swinging the pendulum in the exact opposite direction from that taken by the failed future system effort.
separate non-coherent I&D caches were an aspect ... stores went
into the (store-in) "D" cache ... but wouldn't affect the same storage
location already in the "I" cache. This required a hack for
"loaders" ... which might need to treat executable instructions
as data ... aka special operations that forced any modified data (in
specific address ranges) from the "D" cache back to main memory
... and invalidated the corresponding (address ranges) data in the I
cache. misc. past posts mentioning 801, risc, romp, rios, power,
power/pc, somerset, fort knox, etc
https://www.garlic.com/~lynn/subtopic.html#801
the lack of cache coherency extended to being a roadblock to
implementing 801 SMP. one of the reasons we did HA/CMP "cluster" scale-up
... was that 801 lacked the cache coherency for a standard SMP (scale-up)
implementation. misc. past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
old email discussing ha/cmp cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa
misc. past posts mentioning smp &/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
recent post mentioning rios (rs/6000) compare&swap simulation for a purely
uniprocessor implementation (i.e. the original justification for compare&swap
on 370 involved showing how compare&swap would be used by
high-thruput multithreaded applications ... even running on a
non-multiprocessor machine):
https://www.garlic.com/~lynn/2009k.html#66 A Complete History Of Mainframe Computing
past posts in this thread:
https://www.garlic.com/~lynn/2009g.html#13 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#30 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#32 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#35 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#36 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#43 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#49 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#51 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#55 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#56 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#58 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009g.html#60 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009l.html#26 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009l.html#32 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009l.html#35 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009l.html#36 Old-school programming techniques you probably don't miss
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: another item related to ASCII vs. EBCDIC Newsgroups: alt.folklore.computers Date: Mon, 17 Aug 2009 18:41:26 -0400Rich Alderson <news@alderson.users.panix.com> writes:
there was difficulty getting 3270 terminal justifications into the fall
business plan (past comments mention showing that 3yr amortized capital
depreciation was approx the same per month as a phone on the desk). somewhat
referenced in "MIP ENVY" ... some old email referencing mipenvy:
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
in this post
https://www.garlic.com/~lynn/2007.html#1 The Elements of Programming Style
and copy
https://www.garlic.com/~lynn/2007d.html#email800920
in this post
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
then there was a rapidly spreading rumor that a few top executives had gotten keyboards for email. then all of a sudden the annual 3270 terminal allocation was getting diverted to the desks of executives and management (even tho almost none of them actually ever did their own email).
PROFS somewhat lowered the skill required ... so eventually lots of these terminals got the PROFS main menu burned into the screen.
there were a number of rounds of this ... when new PCs with terminal emulation came out ... the managers and executives needed the latest ... whether they used them or not. there were several examples where departments got (newer) PS2/486s with (larger) 8514 screens for "real" work & development ... and then the machines would get diverted to a manager's office to spend their days doing nothing other than bringing up the profs menu screen 1st thing in the morning and being shut off at night ... the keyboard having become something of a management status symbol (while secretaries continued to do the majority of email).
a couple recent posts mentioning PROFS
https://www.garlic.com/~lynn/2009.html#8 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009.html#23 NPR Asks: Will Cloud Computing Work in the White House?
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Tue, 18 Aug 2009 09:44:24 -0400Dave Garland <dave.garland@wizinfo.com> writes:
my vague (somewhat senior moment) recollection of the university's telco box (in the machine room) ... was that it was approx rack size; the front panel, at about eye level, had buttons that could be pressed for each number (busying/locking it out); a button would also light up when in use (or flash if busied/locked out?). my recollection was that the "hunt group" programming was done in the box.
i'm not positive ... but possibly "modems" were also part of that box(?) ... there were some gray bell handsets also ... but I don't remember whether they were additional lines or not (since the rack had a handset as part of the box).
wiki page mentions Bell 103A & Bell 103A2 ("important boost to use of
remote low-speed terminals such as the KSR33, the ASR33, and the IBM
2741").
https://en.wikipedia.org/wiki/Modem
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: 18 Aug 2009 08:09:34 -0700John A Pershing Jr <pershing@alum.mit.edu> writes:
or maybe the "internal network" was a reference to a "computer
network" ... as opposed to "terminal network". misc. past posts
mentioning "internal network".
https://www.garlic.com/~lynn/subnetwork.html#internalnet
there was CCDN, which was a corporate terminal front-end ... that allowed branch office (and other) terminals to interconnect with a number of internal online services. HONE had a front-end CCDN box ... but there were a number of other online services that also had CCDN front-end terminal connectivity ... like RETAIN (CCDN provided the terminal networking, overlaying the SNA terminal communication).
HONE started out as some number of internal virtual machine cp67 datacenters that provided the ability for SEs in the branch office to play with operating systems. This was after the 23jun69 unbundling announcement ... which included starting to charge for SE time ... which shut off one of the major avenues for branch office skill maintenance ("hands-on" at customer location). When the initial 370s were announced (before the virtual memory announcement), there were a few new instructions. HONE CP67 systems got add-ons that simulated the new instructions so "370" operating systems could be run in cp67 virtual machines (on 360/67).
The science center had also done a port of apl\360 to cp67/cms for cms\apl. HONE started deploying some number of APL-based sales & marketing support applications. Eventually the sales & marketing support came to dominate all HONE use and the virtual-machine-for-SEs activity withered away.
One of my hobbies during the 70s and part of the 80s was providing HONE
with highly customized operating system kernels (periodically dropping by
and resolving problems/bugs, etc). misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone
misc. old email mentioning HONE (also some referencing HONE on VNET
computer network ... as opposed to terminal network)
https://www.garlic.com/~lynn/lhwemail.html#hone
in the mid-70s the US HONE datacenters were consolidated in a single location ... and the HONE vm370 got a lot of enhancements for single-system-image operation ... a large collection of (eventually multiprocessor) machines sharing a common large disk farm.
The science center had done a lot of activity on performance tuning,
workload profiling, performance simulation (eventually evolving into
capacity planning). One of these was made available on HONE as the
performance predictor ... branch office could characterize customer
(mainframe) configuration & workload ... and then ask what-if questions
regarding changes to hardware or workload. misc. past posts mentioning
science center
https://www.garlic.com/~lynn/subtopic.html#545tech
In any case, a modified version of the performance predictor was created that monitored the workload & availability of the individual systems and interacted with the local CCDN terminal front-end ... to route incoming terminal sessions to a specific system (providing workload balancing and availability).
for the fun of it ... somebody in YKT had done a "virtual terminal" enhancement to vm370 ... which allowed a virtual machine process to create virtual 3270s ... that could be simulated on the local machine or carried over communication lines to other machines. This was released as a product called PVM. RLSS provided a gateway into CCDN.
there was an internal CMS application that could make use of the virtual
terminal interface ... which also included a HLLAPI-like language (well
before the PC and PC-based terminal emulation). In any case, it was then
possible to write scripts that went into RETAIN to retrieve PUT bucket
info. old post with bits & pieces of PARASITE & STORY
https://www.garlic.com/~lynn/2001k.html#35
old post with a RETAIN "story" (i.e. a script to connect thru various
internal terminal communication systems, log in to RETAIN, retrieve
information, log off, etc)
https://www.garlic.com/~lynn/2001k.html#36
at the time of APPN announce ... both the person responsible for APPN
and I reported directly to the same executive (this was after Andy went
to AWD, prior to that I was direct report to Andy). from long ago and
far away (after having moved to AWD) ...
Date: Thu, 1 Sep 88 10:01:48 est
From: wheeler
Subject: sna networking
if you are interested i can send you a overview of a system that would
allow sna networks to have full-blown appn superset but back in '85
... only it wasn't done by cpd ... but by a very good customer of ibm.
after i made the presentation to the sna architecture review board ...
their response was to start a task-force on how to make pu5 so
difficult and obtuse, that nobody would ever think of attempting such
a thing again.
cpd even nonconcurred with appn and refused to sign-off on the
announcement. The compromise that was reached with cpd a week before
appn announcement was that there would be nothing in the announcement
information that in any way associated appn with sna (it has only been
relatively recently that cpd has been brought around to letting the
term sna be in any way associated with appn).
however, that is somewhat orthogonal. the unix industry standard is ip
and we need to provide the best world class ip support in existence if
we are to ship a unix product. after that, given that we also wish to
satisfy customer requirements for some ibm mainframe connectivity ...
we also need to have sna support (although even there, ibm mainframe
offerings of ip for the technical unix market are coming on very
fast).
----------
Words of wisdom from Zippy:
I'm ZIPPY the PINHEAD and I'm totally committed to the festive mode.
... snip ... top of post, old email index
as mentioned in
https://www.garlic.com/~lynn/2009l.html#17 SNA: conflicting opinions
I had added rfc1044 support to mainframe tcpip support ... getting
something like factor of thousand times improvement in the bytes
moved per instruction executed ... misc. past posts mentioning 1044
support
https://www.garlic.com/~lynn/subnetwork.html#1044
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
Search the archives at http://listserv.ua.edu/archives/ibm-main.html
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: Tue, 18 Aug 2009 11:48:41 -0400
John A Pershing Jr <pershing@alum.mit.edu> writes:
part of the issue was that after future system was canceled ... misc.
past posts
https://www.garlic.com/~lynn/submain.html#futuresys
there was a mad rush to get products back into the 370 product pipeline (since future system was going to be completely different & replace 360/370 ... there had been no point in putting additional effort into 370). part of that involved the mvs group convincing corporate that the vm370 product be killed, the vm370 development group shut down and all the people moved to POK to support mvs/xa ... or otherwise mvs/xa wouldn't be able to meet its schedule.
eventually endicott managed to convince corporate to acquire the vm370 product mission ... but they had to reconstitute a development group from scratch.
IUCV was involved in all that. There had been a superset of IUCV deployed on internal systems for quite some time ... but the new group in Endicott decided to re-invent their own. Part of this required IUCV & related mechanisms to go thru a series of enhancements and product releases before they came close to matching the functionality of the original internal implementation.
One of my hobbies was doing my own internal product release & support
(including HONE) for internal datacenters. some old email discussing
move from cp67 to vm370 of lots of my enhancements (mostly during
the future system period)
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430
including picking up and merging (IUCV superset) special message (from vm370 datacenter in POK). The original special message had been implemented on cp67 at Pisa science center and then later ported to vm370.
i had to do something similar in the early 80s when I was doing HSDT
project
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RSCS was basically limited to 56kbit links, since 37xx controllers couldn't be had with anything faster. RSCS also had some amount of serialization in the vm370 kernel interface ... that could limit RSCS aggregate thruput to 5-6 4k records/second.
In HSDT, I had multiple (full-duplex) T1 and higher speed links ... and, using RSCS for some of the links, could run into thruput bottlenecks because of the serialization mechanism (maybe 20kbytes-30kbytes/sec). I needed 300kbytes/sec per full-duplex T1 link (and other links required even higher thruput). So in the early 80s, I needed several mbytes/sec.
This is recent post going into some detail of replacing that whole
serialization mechanism
https://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
part of the effort was lifting a whole part of the vm370 kernel implemented in assembler ... moving it into a virtual address space ... re-implementing it in vs/pascal ... allowing the existing synchronous API to work ... as well as allowing asynchronous mode of operation ... and making it run 10-100 times faster (including eliminating all buffer copies).
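A rough sketch of the API idea: keep the existing synchronous interface as a thin wrapper over a new asynchronous path, passing records by reference instead of copying buffers. Python here is purely illustrative (the actual rewrite was in vs/pascal), and all names are invented:

```python
# sketch: a synchronous API preserved on top of an asynchronous
# implementation; the record object is handed off by reference,
# so no intermediate buffer copies are made.
import queue, threading

def async_send(q, record, on_done):
    """asynchronous mode: hand off the record, invoke callback when done."""
    def worker():
        q.put(record)            # pass the reference itself, no copy
        on_done(record)
    threading.Thread(target=worker).start()

def sync_send(q, record):
    """the pre-existing synchronous API, now a wrapper that blocks."""
    done = threading.Event()
    async_send(q, record, lambda _rec: done.set())
    done.wait()                  # old callers still see synchronous behavior

q = queue.Queue()
sync_send(q, b"4k page")
print(q.get())
```

New callers can use `async_send` directly and overlap work; old callers keep working unchanged through `sync_send`.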
recent post mentioning 37xx 56kbits, fat pipes, and other issues with
the communication group (in the early & mid-80s)
https://www.garlic.com/~lynn/2009l.html#24 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
from above ...
In that time-frame, the hsdt project was getting some equipment built to spec ... one friday, before I was to leave for a trip to the other side of the pacific, somebody in the communication group distributed an announcement for a new online computer discussion group on the subject of "high-speed" communication ... that included the following definition:
low-speed: <9.6kbits
medium-speed: 19.2kbits
high-speed: 56kbits
very high-speed: 1.5mbits

the following Monday morning, on the wall of a conference room on the other side of the pacific was:

low-speed: <20mbits
medium-speed: 100mbits
high-speed: 200-300mbits
very high-speed: >600mbits

... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: 18 Aug 2009 09:15:42 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
except, instead of moving it from the virtual machine into the cp kernel
to make it run faster ... i moved it from the cp kernel into a virtual
machine and made it run significantly faster ... including no buffer
copies and being able to sustain megabyte or more per sec on 4341
machine. recent post/thread on the subject:
https://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
note this is totally different than the later rfc1044 work for
mainframe tcp/ip ... also able to sustain megabyte or more per sec on
4341 machine.
https://www.garlic.com/~lynn/subnetwork.html#1044
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: 18 Aug 2009 09:59:40 -0700
John A Pershing Jr <pershing@alum.mit.edu> writes:
as undergraduate in the 60s ... i got to do a lot of os/360 enhancements at the univ ... i would take stage1 output and carefully re-arrange the stage2 sysgen cards to optimally place datasets and PDS members. For some amount of the univ. workload, I was able to obtain nearly 3-fold thruput improvement (because of the improvement in disk arm motion).
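The arm-motion arithmetic behind that kind of improvement can be illustrated with a toy calculation; the cylinder numbers below are made up, only the principle (clustering the hot datasets shortens total seek travel) is from the text:

```python
# toy illustration: total arm travel for the same sequence of requests
# under two dataset placements (cylinder numbers are invented)
def total_seek(cyls):
    """sum of absolute cylinder-to-cylinder distances for a request trace."""
    return sum(abs(b - a) for a, b in zip(cyls, cyls[1:]))

scattered = [5, 180, 12, 175, 8, 190]   # hot datasets far apart on the pack
clustered = [90, 95, 91, 96, 92, 97]    # same datasets placed adjacent

print(total_seek(scattered), total_seek(clustered))   # 855 vs 23
```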
I also got to rewrite large amounts of the cp67 kernel code ... for pathlength optimization, dynamic adaptive resource management, virtual memory management and page replacement, ordered arm seek queuing ... and loads of other stuff.
part of old share presentation at fall '68 share meeting ... mentioning
some of the cp67 work and the mft14 work:
https://www.garlic.com/~lynn/94.html#18
the univ. library then got an ONR grant to do a computerized catalog
... and IBM selected them to be one of the betatest sites for the
original CICS product. I got tasked to support/debug that CICS
deployment. Misc. past posts mentioning cics &/or bdam
https://www.garlic.com/~lynn/submain.html#bdam
More than a decade later ... when Jim was departing for Tandem ... he
attempted to palm off both consulting with the IMS group ... as well as
talking to customers about relational DBMS ... some of this discussed in
past posts about system/r (original relational/sql implementation)
https://www.garlic.com/~lynn/submain.html#systemr
some old email related to the subject:
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
also mentioned in this recent post
https://www.garlic.com/~lynn/2009l.html#41
this was in the same period that I was also starting HSDT effort
https://www.garlic.com/~lynn/subnetwork.html#hsdt
I originally joined the science center in cambridge ... and then
transferred to SJR in san jose. Besides getting to play in research,
they let me play in the Los Gatos VLSI lab and with DBMS stuff down
in STL; I also got to play disk engineer in bldgs. 14 & 15 ... some
past posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
When I became Andy's direct report ... I got to stay in san jose (even tho carried on the books in ykt) with offices in various bldgs. in san jose area (including almaden after the move up the hill from bldg. 28). At the time, there were some number of people that worked on projects Andy was backing ... but didn't actually report to him. That changed when he became head of AWD.
One of the things I also did at the univ. for cp67 was add tty/ascii
terminal support (to the existing 2741 & 1052 terminal support). The
existing support played games with dynamically figuring out the terminal
type and used the 2702 SAD command to dynamically re-associate specific
line-scanner with specific port/line. I tried to add the tty/ascii
support in a similar dynamic manner. This all worked for
leased/hardwired lines ... but didn't quite work for dialed lines ...
since the 2702 had taken a shortcut and hardwired the line-speed
oscillator to each port. recent thread/posts discussing some of this
https://www.garlic.com/~lynn/2009l.html#34 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009l.html#42 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
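The dynamic terminal-type determination described above can be sketched as a probe loop: try each line-scanner in turn until one decodes the line cleanly. A toy illustration (the probe interface and names are invented, not the actual 2702 SAD mechanism):

```python
# toy sketch of dynamic terminal-type identification: try each line
# scanner against the port until one yields a clean decode
SCANNERS = ["2741", "1052", "tty"]

def identify(port_probe):
    """port_probe(scanner) -> True if that scanner decodes the line cleanly;
    returns the matching scanner name, or None if nothing matched."""
    for scanner in SCANNERS:
        if port_probe(scanner):
            return scanner
    return None

print(identify(lambda s: s == "tty"))   # a tty/ascii terminal dialed in
```

The hard part in the text, of course, was that the 2702's hardwired per-port oscillator meant baud rate could not be switched the same way for dialed lines.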
this was much of the motivation for the university to start its own
clone controller project; reverse engineer the channel interface, build
a clone channel interface board for interdata/3 and program the
interdata/3 to emulate the 2702 (but with additional feature that it
could dynamically determine baud rate). there was a subsequent article
blaming four of us for the clone controller business (interdata started
selling it as a product ... later sold under perkin/elmer logo when PE
bought interdata)
https://www.garlic.com/~lynn/submain.html#360pcm
clone controller business has been blamed as major motivation for the
future system effort
https://www.garlic.com/~lynn/submain.html#futuresys
as well as the convoluted nature of the pu4/pu5 interface ... there were lots of jokes in other product organizations attempting to build "SNA" compliant interfaces ... that it didn't make any difference whether something was built to the SNA specification ... the only thing that really mattered was whether it worked with pu4/pu5.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: SNA: conflicting opinions Newsgroups: bit.listserv.ibm-main Date: 18 Aug 2009 10:35:19 -0700
John A Pershing Jr <pershing@alum.mit.edu> writes:
at interop '88 ... there was definitely government OSI shadow over
everything. misc. past posts mentioning Interop '88 ... lots of
nominally "tcp/ip" companies showing OSI products in their booths:
https://www.garlic.com/~lynn/subnetwork.html#interop88
tcp/ip had been the technology basis for the modern internet, the NSFNET backbone was the operational basis for the modern internet, and CIX was the business basis for the modern internet.
misc. past posts mentioning NSFNET backbone
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
old email mentioning NSFNET
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
CIX wiki page ... CIX early 90s ... with the growth of CIX, it was pretty
much a death blow to the various GOSIP mandates:
https://en.wikipedia.org/wiki/Commercial_Internet_eXchange
the above mentions YKT took point on NSFNET backbone.
Note this was different than what we had been doing in HSDT (including support for T1 and faster links). As I've mentioned, we weren't allowed to bid on NSFNET (even tho the director of NSF wrote a letter to the corporation, copying the CEO, asking for our participation ... he also mentioned that what we already had running was at least five yrs ahead of all NSFNET backbone bid submissions).
We had strongly pushed (at least) T1 for NSFNET backbone (we already had T1 links deployed, fiber, satellite, copper) and T1 was included in the RFP. What actually got deployed was 440kbit links ... not T1 links ... although possibly to try and meet the letter of the RFP, T1 trunks were deployed with telco multiplexors that ran multiple links over the T1 trunks (we used to joke that they could possibly claim that it was a T5 network ... since there was some possibility that the T1 trunks could possibly be further multiplexed on a T5 trunk at some point in the telco infrastructure).
YKT also took point on NSFNET-2 backbone RFP ... for upgrade of the T1 (trunks) to T3. A blue team of 20-30 or so people from 5-6 labs around the world were put together. I was the red team. At the review, I got to present first; then the blue team. 5-10 minutes into the blue team presentation, the person running the review pounded on the table and said nothing but the blue team proposal would be allowed to go forward (in reference to red team proposal was obviously far superior, and there should be no more discussion on the subject). I got up and walked out. One or two other people got up and walked out also.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: The Death of Servers and Software Newsgroups: bit.listserv.ibm-main Date: 18 Aug 2009 16:53:43 -0700
E99071@JP.IBM.COM (Timothy Sipples) writes:
with 10:1 server consolidation there possibly is a 10:1 long-term decline in server market revenues ... large volume operations get ten times the productivity from their hardware (using virtualization for 10:1 server consolidation) ... so they only need 1/10th as much hardware in the future (having gotten into the situation of humongous numbers of servers running at 10% utilization or less ... which created the 10:1 server consolidation opportunity in the first place).
during the transition phase ... server sales might even drop to zero ... while large volume operations are finding they can consolidate all their operations onto 1/10th the number of servers, there could be a long period where they use the remaining, idle 90% of the servers (that had been moved off of) for new applications ... in lieu of having to buy new servers. Until new uses have been found for that idle 90% of the current install base (freed up by 10:1 server consolidation from virtualization) ... server purchases possibly drop to zero.
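The consolidation arithmetic, as a toy model (all numbers hypothetical; only the 10%-utilization / 10:1 relationship is from the text):

```python
# toy model of the consolidation argument: ~10% utilization per box is
# exactly where a 10:1 consolidation ratio comes from
servers = 1000            # hypothetical install base
utilization = 0.10        # each box running at ~10% utilization

needed = round(servers * utilization)   # fully-loaded hosts for the same work
freed = servers - needed                # boxes now idle and reusable

print(needed, freed)                    # 100 hosts carry the load, 900 freed
```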
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Behind Menuet: an OS written entirely in assembler Newsgroups: alt.folklore.computers Date: Wed, 19 Aug 2009 09:50:32 -0400
Behind Menuet: an OS written entirely in assembler
but then can you say "cp67"? ... much more common 40+ yrs ago.
We talk to the developers behind MenuetOS: an operating system written
entirely in assembly language
http://www.goodgearguide.com.au/article/315421/we_talk_developers_behind_menuetos_an_operating_system_written_entirely_assembly_language
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Hacker charges also an indictment on PCI, expert says Date: 19 Aug, 2009 Blog: Payment Systems Network
Hacker charges also an indictment on PCI, expert says
from above:
also serves as an indictment of sorts against the fraud conducted by
PCI -- placing the burden of security costs onto retailers and
card processors when what is really needed is the payment card
industry investing in a secure business process.
... snip ...
in the mid-90s, the x9a10 financial standard working group had been
given the requirement to preserve the integrity of the financial
infrastructure for all retail payments ... after detailed end-to-end,
threat & vulnerability studies of the various environments ... this
was effectively what x9a10 concluded also ... which resulted in the
x9.59 financial standard; some references
https://www.garlic.com/~lynn/x959.html#x959
it didn't do anything about preventing skimming, eavesdropping, data breaches, etc ... it just slightly tweaked the paradigm ... eliminating the ability of crooks to use the information for fraudulent transactions.
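A minimal sketch of that paradigm tweak: require every transaction to be authenticated, so a harvested account number alone can't be replayed. HMAC is used below as a stand-in for the per-account signing key (x9.59 itself specifies digital signatures), and all names are illustrative:

```python
# sketch of the x9.59 idea: knowledge of the account number alone is
# no longer sufficient to perform a transaction. HMAC stands in here
# for a per-account signing key; x9.59 uses digital signatures.
import hmac, hashlib

def sign_txn(key, account, amount, nonce):
    msg = f"{account}|{amount}|{nonce}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_txn(key, account, amount, nonce, tag):
    return hmac.compare_digest(sign_txn(key, account, amount, nonce), tag)

key = b"per-account-secret"        # never travels with the transaction
tag = sign_txn(key, "4111-0000", "25.00", "n1")
assert verify_txn(key, "4111-0000", "25.00", "n1", tag)

# a crook who only harvested the account number can't produce a valid tag:
assert not verify_txn(key, "4111-0000", "9999.99", "n2", "forged")
```

Skimmed or breached account numbers become useless for replay, even though nothing stops the skimming itself.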
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Wed, 19 Aug 2009 10:47:29 -0400
Charles Richmond <frizzle@tx.rr.com> writes:
the other end of the rs232 wires could be racks of individual telco "datasets" ... individual handsets (one per phone line) with rs232 coming out (running to the computer box). there was also a rack-sized telco box that would handle a large number of lines (full T1 telco wire coming in for 24 circuits ... the central office connection might have enabled only a subset of the 24 circuits).
later there were single boxes that took the telco T1 wire, handling the multiplexing of the 24 phone circuits directly into the computer.
quicky web search turns up article "Understanding T1 Circuits" ... that
talks about 24 circuits carried on T1:
http://ezinearticles.com/?Understanding-T1-Circuits&id=980254
my recent posts about T1 links (and faster speed, in the 80s) ... some
of them were (data) clear-channel ... with no channelization. We did
have a period where some of the telcos started mandating that some of our
T1 "clear-channel" circuits ... had to add support for the "193rd bit"
convention (for ones density).
https://www.garlic.com/~lynn/2009l.html#7 VTAM security issue
https://www.garlic.com/~lynn/2009l.html#14 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009l.html#24 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2009l.html#44 SNA: conflicting opinions
https://www.garlic.com/~lynn/2009l.html#47 SNA: conflicting opinions
wiki page discussing DS1 and 193rd bit:
https://en.wikipedia.org/wiki/Digital_Signal_1
we went thru a period of cobbling together equipment that would supply
the 193rd-bit convention on previously clear-channel T1 circuits.
https://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2004l.html#5 Xah Lee's Unixism
https://www.garlic.com/~lynn/2005d.html#1 Self restarting property of RTOS-How it works?
https://www.garlic.com/~lynn/2005n.html#36 Code density and performance?
https://www.garlic.com/~lynn/2006k.html#55 5963 (computer grade dual triode) production dates?
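The DS1 numbers behind the "193rd bit" are simple arithmetic: 24 channels of 8 bits each, plus one framing bit per frame, at 8000 frames per second:

```python
# DS1/T1 framing arithmetic: 24 channels of 8 bits plus the one framing
# bit (the "193rd bit") per frame, 8000 frames/sec
CHANNELS = 24
BITS_PER_CHANNEL_SAMPLE = 8
FRAMES_PER_SEC = 8000

frame_bits = CHANNELS * BITS_PER_CHANNEL_SAMPLE + 1        # 193 bits/frame
line_rate = frame_bits * FRAMES_PER_SEC                    # 1,544,000 bit/s
payload_rate = CHANNELS * BITS_PER_CHANNEL_SAMPLE * FRAMES_PER_SEC  # 1,536,000

print(frame_bits, line_rate, payload_rate)
```

The framing bit is why a "clear-channel" circuit loses 8kbit/s of its nominal 1.544mbit when the 193rd-bit convention is imposed; the ones-density requirement is a separate constraint on the payload bits themselves.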
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: August 7, 1944: today is the 65th Anniversary of the Birth of the Computer Newsgroups: alt.folklore.computers Date: Wed, 19 Aug 2009 13:10:52 -0400
Charles Richmond <frizzle@tx.rr.com> writes:
some number of univ. had terminal rooms/locations ... and wouldn't pay phone costs for connectivity ... but had the terminals "hard-wired" (direct line from terminal to computer ... w/o any telco involvement).
for remote rooms/locations around campus ... rather than run a line for every terminal back to the computer room ... they could have various kinds of terminal concentrators at the remote location ... with a single line into the computer room (something akin to CAT4 star-wired connections ... whether t/r or enet).
the box back in the computer room might simulate the physical wires ... turning packets from different terminals into signals on different lines going into the computer controller box. others would have a bulk connection into the computer, with the computer separating out the different terminals based on some kind of header. a large scale commercial version of some of this stuff was TYMNET ... where remote TYMNET boxes might also have dial-in numbers at POPs around the world.
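A toy sketch of the header-based multiplexing idea; the framing here (one-byte terminal id plus one-byte length) is invented for illustration, not any particular product's:

```python
# toy terminal multiplexing: many terminals share one line into the
# computer room; each burst is prefixed with a terminal id and length
def mux(frames):
    """frames: list of (terminal_id, data bytes) -> one byte stream."""
    out = bytearray()
    for tid, data in frames:
        out += bytes([tid, len(data)]) + data
    return bytes(out)

def demux(stream):
    """inverse of mux: split the shared stream back out per terminal."""
    i, frames = 0, []
    while i < len(stream):
        tid, n = stream[i], stream[i + 1]
        frames.append((tid, stream[i + 2 : i + 2 + n]))
        i += 2 + n
    return frames

wire = mux([(1, b"LOGIN"), (2, b"LIST"), (1, b"PS")])
print(demux(wire))
```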
There was an (infamous) case of campus infrastructure at cornell in
ithaca ... that had an extender box ... being able to remote a nominally
machine room box to locations around campus. the boxes
communicated wirelessly ... and used a CRC chip as the mechanism for
getting ones density in the signal. Turns out the IBM mainframe boxes used
the same CRC algorithm for recognizing transmission bit errors. The net was
that the multiple application of CRC resulted in not recognizing some number
of transmission bit errors. detailed discussion of the problem (from
Tymshare's VMSHARE archive dated 7/18/86):
http://vm.marist.edu/~vmshare/browse.cgi?fn=CRC-FAIL&ft=PROB
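The underlying CRC property can be demonstrated in a few lines: a CRC with generator g(x) cannot detect any error pattern that is a multiple of g(x), so when a link box and the host both work from the same polynomial, an error the link introduces can land exactly on an undetectable pattern. This is an illustrative sketch using CRC-16-CCITT, not the actual hardware involved:

```python
# a CRC with generator g(x) cannot detect error patterns divisible by g(x)
POLY = 0x11021  # CRC-16-CCITT generator polynomial, 17 bits

def crc16(bits):
    """bitwise long division of bits * x^16 by POLY; 16-bit remainder."""
    reg = 0
    for b in bits + [0] * 16:        # append 16 zero bits for the remainder
        reg = (reg << 1) | b
        if reg & 0x10000:
            reg ^= POLY
    return reg

msg = [1, 0, 1, 1, 0, 0, 1, 0] * 4                       # arbitrary 32-bit message
r = crc16(msg)
codeword = msg + [(r >> i) & 1 for i in range(15, -1, -1)]
assert crc16(codeword) == 0          # a valid codeword checks clean

# corrupt the codeword with one copy of the generator polynomial itself:
err = [(POLY >> i) & 1 for i in range(16, -1, -1)] + [0] * (len(codeword) - 17)
corrupted = [a ^ b for a, b in zip(codeword, err)]
assert corrupted != codeword         # 6 bits really did flip ...
assert crc16(corrupted) == 0         # ... yet the CRC still checks clean
```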
tymnet wiki page ... tymnet was part of tymshare ... used for providing
cost effective remote terminal access into tymshare's (vm370 virtual
machine) online system:
https://en.wikipedia.org/wiki/Tymnet
telenet was similar ... wiki page:
https://en.wikipedia.org/wiki/Telenet
from above:
The switching nodes were fed by Telenet Access Controller (TAC) terminal
concentrators both colocated and remote from the switches. By 1980,
there were over 1000 switches in the public network. At that time, the
next largest network using Telenet switches was that of Southern Bell,
which had approximately 250 switches.
... snip ...
and
ConnNet
https://en.wikipedia.org/wiki/ConnNet
DATAPAC
https://en.wikipedia.org/wiki/DATAPAC
above references URL (gone 404) about how DATAPAC was used in
univ. setting as well as waterloo wiki page:
https://en.wikipedia.org/wiki/University_of_Waterloo
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Hacker charges also an indictment on PCI, expert says Date: 19 Aug, 2009 Blog: Payment Systems Network
re:
After the x9.59 standard work, we've used some number of metaphors to characterize the current paradigm:
• security proportional to risk; in the current paradigm, the value of the information to the merchant is the profit on the transaction (possibly a couple dollars) and the value of the information to the processor can be a few cents per transaction ... while the value of the information to the crooks can be the credit limit and/or account balance (the crooks attacking the infrastructure may be able to outspend the merchant & processor defenders by a factor of one hundred times)
• dual-use vulnerability; in the current paradigm, knowledge of the account number may be sufficient to perform a fraudulent transaction (effectively authentication, as such it needs to be kept confidential and never divulged anywhere) ... while at the same time the account number needs to be readily available for a large number of business processes. The conflicting requirements (never divulged and at the same time readily available) have led to comments that even if the planet were buried under miles of information-hiding encryption, it still couldn't prevent information leakage.
there have also been past studies that as much as 70% of "identity theft" incidents have involved insiders. This plays a factor in the "dual-use vulnerability" metaphor ... where there are large number of places in the numerous business processes where "insiders" are required to have access to the data. It also plays a factor in the security proportional to risk metaphor where it would be relatively obvious for crooks to bribe "insiders".
One of the issues with the long standing involvement of insiders in such vulnerabilities is leveraging the possibility of internet attack (by outsiders) to obfuscate insider involvement.
... and somewhat related item:
PandaLabs: 600% Rise In Users Hit By ID Theft Malware; Economic
crisis, black market sales of credit card numbers, Paypal or Ebay
accounts could be to blame
http://www.darkreading.com/security/antivirus/showArticle.jhtml?articleID=219400673
another item:
Experts: More Heartland-Style Breaches Expected; Despite Arrests,
Analysts say 'This is Probably Just the Start'
http://www.bankinfosecurity.com/articles.php?art_id=1717&rf=081909eb
Despite Gonzalez Indictment, No Easy Answers for Merchants
http://www.digitaltransactions.net/newsstory.cfm?newsid=2296
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: another item related to ASCII vs. EBCDIC Newsgroups: alt.folklore.computers Date: Wed, 19 Aug 2009 14:24:48 -0400
scott@slp53.sl.home (Scott Lurndal) writes:
re:
https://www.garlic.com/~lynn/2009l.html#23 another item related to ASCII vs. EBCDIC
https://www.garlic.com/~lynn/2009l.html#25 another item related to ASCII vs. EBCDIC
later ... one of the 10 impossible things was having to take down the whole res. system at least a couple times a month for several hrs to rebuild various components (including the "routes" database) ... the rewrite of "routes" included eliminating having to take an outage when updating routes information.
as things became more global ... no matter when the outage (which tended to be 3rd shift on a weekend) was scheduled ... it would affect somebody, somewhere in the world. my son had also mentioned that scheduled outages sometimes would extend/overrun into 1st shift ... when he was trying to use the res. system to schedule freight (he wasn't limited to scheduling frt on AA, but could also schedule on most other carriers).
One of the other things ... there were some airports notorious for pilferage (the top 4-5 much worse than any others). it wasn't just passenger baggage ... but also freight. For high value shipments into the worst 4-5 airports ... there was a constant battle of strategies to obfuscate high value freight from pilferage by airport workers (including decoy shipments ... keeping statistics on what kinds of things airport workers were most likely to fiddle with, how they were packaged, etc). Sometimes ... whole airfreight shipping containers would simply disappear (in at least one case involving one of the worst airports, the company had a representative watch the container being loaded into the belly of a non-stop flt ... and a company representative was at the arriving flt within 30mins of touch down ... and there was no trace of the container to be found).
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM halves mainframe Linux engine prices Newsgroups: bit.listserv.ibm-main Date: Wed, 19 Aug 2009 19:14:16 -0400
John A Pershing Jr <pershing@alum.mit.edu> writes:
CP/40 (and then CP/67) development went on in parallel with CMS (when it was still called "cambridge monitor system") ... with CMS originally running stand-alone on a 360/40, in parallel with CP/40 development using the same 360/40.
When some people from the science center came out to the univ. to install CP/67, CP/67 source was still being kept on OS/360, assembled on OS/360 ... and physical "TXT" decks punched ... with the cards for a kernel build being kept in a card tray (individual TXT decks had colored markers across the top, to selectively replace the TXT cards for specific routines). It wasn't until later in '68 that the CP/67 group felt that CMS was stable enough to move CP/67 development over to CMS (and off OS/360).
For some trivia ... the claim is that the person responsible for CP/M (early personal computer system) ... had cribbed the name from CP67/CMS ... he had used CP67/CMS at NPG school at Monterey in early 70s.
other CMS trivia was that in the transition from CP67 to VM370, CMS had its name changed to "conversational monitor system" ... and was artificially crippled so it would no longer boot/ipl on the bare hardware.
One of the problems over the decades was having people from traditional operating system backgrounds come to work on VM ... the CP heritage from the earliest beginnings was constantly removing stuff from the kernel ... while people with traditional operating system backgrounds would take shortcuts and constantly put things into the kernel.
I had done detailed structural and code-flow analysis of the CP kernel
structure in the early to mid 80s, and there was a huge amount of stuff
(that never belonged there) that had been added to the kernel; it was
turning into a mess of spaghetti code ... old post with some discussion
of the analysis
https://www.garlic.com/~lynn/2001m.html#53
more recent posts about removing stuff from the cp kernel to make things
run faster:
https://www.garlic.com/~lynn/2009l.html#36 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009l.html#44 SNA: conflicting opinions
so part of the reason for taking a large component from the vm370
kernel and moving it into virtual machine ... while increasing
thruput by possibly factor of 100 times ... was partly to
make HSDT links run faster
https://www.garlic.com/~lynn/subnetwork.html#hsdt
and partly to demonstrate that lots of the code in the kernel wasn't justified being there.
In the early days of TPF (not long after the name change), TPF didn't support multiprocessors ... and the 3081 was originally to be a multiprocessor-only machine ... eventually a partially crippled 3081 uniprocessor was produced, the 3083 ... primarily for the TPF market ... however, until that happened, for TPF to run on 3081s, it had to run under vm370.
wiki acp page:
https://en.wikipedia.org/wiki/IBM_Airline_Control_Program
wiki tpf page:
https://en.wikipedia.org/wiki/Transaction_Processing_Facility
For an environment that was nearly all TPF execution ... that resulted in
the 2nd processor being idle (unless two TPFs were run concurrently).
The original vm370 multiprocessor support came from a design/project
that I had done in 1975 for a 5-way SMP (that was never announced or
shipped) ... which I claimed had the optimal kernel pathlength overhead
for supporting multiprocessor operation (at least for an environment
with enough virtual machines to keep all processors running at 100%
utilization). lots of past posts mentioning SMP support and/or
the compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
In any case, for the (3081, multiprocessor) TPF market segment ... one of the vm370 releases had a major change in the way multiprocessor support was implemented ... which enabled a lot of cp kernel code to be executed concurrently/asynchronously on the 2nd (idle) processor, overlapped with TPF virtual machine execution. This significantly increased the multiprocessor kernel pathlengths ... but that was traded off against a little increased TPF throughput (since the increased pathlengths were offset by the asynchronous use of the 2nd/idle processor). The TPF market segment thought it was great ... however, all the other vm370 (multiprocessor) customers saw a ten percent throughput degradation.
One of the issues with ACP/TPF ... was that the data management
facilities were rather primitive and systems had to be taken down on a
regular basis for major updates (potentially once a week). We had been
brought in to one of the major res. systems to look at "fixing" their
"ten impossible" problems (including the scheduled outages). Some
recent posts on the subject:
recent posts on the subject:
https://www.garlic.com/~lynn/2009l.html#25
https://www.garlic.com/~lynn/2009l.html#54
we had another episode with a (different) res. system ... for a short stint my wife was chief architect for amadeus (the major european res system that started off being built from the old eastern airlines res system). One of the problems was that my wife sided with most of europe in choosing x.25 as the major terminal connectivity. This led to the SNA forces getting her replaced ... however, it didn't do any good; Amadeus went with x.25 anyway.
wiki amadeus pages:
https://en.wikipedia.org/wiki/Amadeus_CRS
https://en.wikipedia.org/wiki/Amadeus_IT_Group
current amadeus web page:
http://www.amadeus.com/amadeus/amadeus.html
other wiki res system pages:
https://en.wikipedia.org/wiki/Sabre_%28computer_system%29
https://en.wikipedia.org/wiki/Galileo_CRS
https://en.wikipedia.org/wiki/Worldspan
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Linkedin digital certificate today Date: 20 Aug, 2009 Blog: Information Security Networkdiscussion from month ago:
Linkedin digital certificate expired today
Anybody else notice that Linkedin (Thawte SGC CA) digital certificate expired today?
for a little drift ... lots of past posts mentioning SSL digital
certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert
we had been called in to consult with a small client/server startup that wanted to do payment transactions on their server ... they had also invented this technology called SSL they wanted to use ... the result is now frequently referred to as "electronic commerce". As part of the "electronic commerce" activity, we had to do end-to-end business walk-thru/audits of several of these new entities calling themselves Certification Authorities.
Fairly shortly we coined the term certificate manufacturing. Some number of the operations had corrupted the reference for "CA" from Certification Authority to Certificate Authority. Nominally a certificate represents some sort of certification ... analogous to what a diploma is supposed to represent. All the disclaimers with regard to any certification somewhat downgrade the meaning of a certificate, analogous to "diploma mills".
Linkedin now has new ssl certificate that is good from 7/6/2009 until 7/6/2010
...
and recent news article:
Why I Refuse to Update My Website Certificate
http://www.darkreading.com/blog/archives/2009/08/why_i_refuse_to.html
from above:
Anyone can buy a certificate for their Website by indicating they own
the domain, with little verification, and in some past cases, no
verification at all. Certificates cannot be completely trusted to
verify ownership.
... snip ...
As part of the effort to apply "SSL" to the payment business process, there were numerous assumptions regarding things necessary for the integrity of the process ... some number of which were almost immediately violated.
Some past discussions somewhat related to the end-to-end operation of
related "SSL" processes
https://www.garlic.com/~lynn/subpubkey.html#catch22
Very shortly after posting the above ... I visited a major corporation
URL ... and firefox hiccuped with a message that it was an untrusted
site. It only lasted for five minutes ... but during that period,
firefox was claiming mismatch between the URL and the SSL digital
certificate:
issued to:
akamai.net
Akamai Technologies, Inc
issued by
GTE CyberTrust Global Root
GTE Corporation
..... snip ...
Basically Akamai webhosting has been contracted to host (impersonate) some number of large corporate websites. Several times in the past, I've seen the Akamai corporate impersonation bleed through.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: IBM halves mainframe Linux engine prices Date: 20 Aug, 2009 Blog: Mainframe Experts NetworkIBM halves mainframe Linux engine prices
from above:
The IBM mainframe may not have a lot of direct competition when it
comes to z/OS-based batch and transactional work, but the story is
different when it comes to Linux. There's plenty of competition
... snip ...
there is thread in the ibm-main mailing list (started on bitnet in the
80s) on the same article, which has been getting quite a bit of play
... one of my archived posts in that forum:
https://www.garlic.com/~lynn/2009l.html#55
with regard to "transactional" work ... somewhat related recent post
in (linkedin) Financial Crime Risk, Fraud and Security
https://www.garlic.com/~lynn/2009l.html#20
makes references to this post last year with regard to get together
celebrating Jim
https://www.garlic.com/~lynn/2008p.html#27 Father of Financial (transaction) Dataprocessing
post in semi-related (ibm-main) thread about another news article
related to "server" market segment
https://www.garlic.com/~lynn/2009l.html#48
as to five-nines availability ... we also did the HA/CMP product
... some past posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
and I was then asked to write a section for the corporate continuous
availability strategy document. Unfortunately (at the time), neither
Rochester nor POK could meet the requirements and they had my section
pulled. When we were out marketing HA/CMP, I coined the terms
geographic survivability and disaster survivability (to differentiate
from disaster/recovery) ... some past posts
https://www.garlic.com/~lynn/submain.html#available
some of the business process re-engineering projects during the 90s had problems because the process implementation in the legacy code was no longer understood.
there were also billions spent in the financial industry to move processes off mainframes as part of eliminating the overnight batch window. The issue was that some number of the real-time transaction implementations from the 70s ... only partially completed the business process ... leaving the rest of the process (like settlement) to the overnight batch window.
In the 90s, with the combination of increasing workload and globalization (shrinking the size of the window) ... extreme pressure was being placed on those (overnight) batch (window) operations. Billions were spent on redoing implementations for straight-through processing using parallelization on large numbers of distributed "killer micros". The problem was that it wasn't until well into some of the deployments that they became aware that the distributed programming technology being used introduced a factor of 100 times overhead (compared to the mainframe legacy batch implementation) ... totally swamping all anticipated throughput increases.
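the arithmetic of why a 100-times overhead swamps parallelization can be sketched (only the 100x factor comes from the text; the 50-way parallelism is an illustrative assumption):

```python
# back-of-envelope: perfectly parallel scaling divided by the
# per-transaction cost inflation; the 100x overhead is from the text,
# the 50-way parallelism is an illustrative assumption.

def relative_throughput(n_processors, overhead_factor):
    return n_processors / overhead_factor

legacy_batch  = relative_throughput(1, 1)
killer_micros = relative_throughput(50, 100)

print(killer_micros / legacy_batch)  # 0.5 -- half the legacy throughput
```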
Those failures have contributed significantly to the risk-averse attitude regarding major mainframe re-engineering efforts.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: ACP, One of the Oldest Open Source Apps Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Fri, 21 Aug 2009 09:11:26 -0400ACP, One of the Oldest Open Source Apps
from above:
"The Airline Control Program (ACP)", introduced by IBM around 1967,
predated the term 'open source' by decades.
... snip ...
An Abbreviated History of ACP, One of the Oldest Open Source Applications
http://www.itworld.com/print/75218
above doesn't mention the lack of SMP support during 3081 period ...
when (initially) all machines were going to be multiprocessor:
https://www.garlic.com/~lynn/2009l.html#55
mention 23jun69 unbundling announcement (starting to charge for
application software, SE services, other stuff ... however, the case
was made to still provide kernel software for free):
https://www.garlic.com/~lynn/submain.html#unbundle
some recent posts
https://www.garlic.com/~lynn/2009f.html#0 How did the monitor work under TOPS?
https://www.garlic.com/~lynn/2009f.html#18 System/360 Announcement (7Apr64)
https://www.garlic.com/~lynn/2009h.html#68 My Vintage Dream PC
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#37 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#45 dynamic allocation
https://www.garlic.com/~lynn/2009j.html#18 Another one bites the dust
https://www.garlic.com/~lynn/2009j.html#68 DCSS addenda
https://www.garlic.com/~lynn/2009k.html#71 Hercules Question
https://www.garlic.com/~lynn/2009l.html#1 Poll results: your favorite IBM tool was IBM-issued laptops
https://www.garlic.com/~lynn/2009l.html#22 another item related to ASCII vs. EBCDIC
the distraction of the future system effort (and allowing the 370 product
pipeline to go dry) is associated with letting clone processors get a
foothold in the market
https://www.garlic.com/~lynn/submain.html#futuresys
which possibly contributed to the decision to start to charge for kernel
software. after future system was killed & in the mad rush to get
stuff back into the 370 product pipeline ... some of the 370 software
I had been doing all during the future system period was selected for
release. One of the pieces, dynamic adaptive resource management, was
selected to be the guinea pig for kernel software charging ... being
packaged as separate kernel component ... some past posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
over the next several years ... more and more kernel components became charged for ... until the whole kernel was priced ... eliminating the packaging issues of mixing charged & no-charge components.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: ISPF Counter Newsgroups: bit.listserv.ibm-main Date: 21 Aug 2009 06:59:48 -0700joarmc@SWBELL.NET (John McKown) writes:
in the 70s ... with the real 3272 controller & 3277 keyboards ... there was a hardware hack done to mask the really annoying and frustrating half-duplex operation. You unplug the 3277 keyboard from the 3277 head, plug in a small specially built hardware FIFO box, and then plug the keyboard into the FIFO. The FIFO box was able to queue keystrokes that would happen during any periods that the screen was being written.
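the FIFO box behavior can be sketched in software (purely illustrative ... the real box was hardware sitting between the keyboard and the 3277 head):

```python
from collections import deque

# sketch of what the 3277 keyboard FIFO box did: buffer keystrokes
# typed while the host is writing the screen (the period when a stock
# 3277 keyboard was locked) and deliver them once the write completes.

class KeyboardFifo:
    def __init__(self):
        self.queue = deque()    # keystrokes buffered during screen writes
        self.screen_busy = False
        self.delivered = []     # keystrokes passed through to the host

    def keystroke(self, key):
        if self.screen_busy:
            self.queue.append(key)      # buffer instead of locking out
        else:
            self.delivered.append(key)

    def begin_screen_write(self):
        self.screen_busy = True

    def end_screen_write(self):
        self.screen_busy = False
        while self.queue:               # drain in original typing order
            self.delivered.append(self.queue.popleft())

fifo = KeyboardFifo()
fifo.keystroke("a")
fifo.begin_screen_write()               # host rewrites the screen
fifo.keystroke("b"); fifo.keystroke("c")  # typed during the write
fifo.end_screen_write()
print("".join(fifo.delivered))          # abc -- nothing lost
```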
With the move to 3278/3279 ... lots of the electronics that had previously been in the 3277 terminal ... were moved back into the 3274 controller ... and lots of the human factors (hardware) hacks to the 3277 no longer worked. Fortunately it was still possible to configure a 3277 to work with a 3274 controller ... so I kept my 3277 until able to replace it with a pc/at & terminal emulation.
Note that the ANR (aka 3277) coax protocol had a lot higher thruput than DCA
(aka 3278) coax protocol. So even after getting pc/at with software
emulation ... there was still preference for 3277 (ANR) adapter card
(so uploads/downloads ran significantly faster). couple past posts
(including reference to ANR having three times the thruput of DCA):
https://www.garlic.com/~lynn/2007r.html#10 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007t.html#40 Why isn't OMVS command integrated with ISPF?
https://www.garlic.com/~lynn/2007t.html#42 What do YOU call the # sign?
https://www.garlic.com/~lynn/2008h.html#9 3277 terminals and emulators
https://www.garlic.com/~lynn/2008r.html#46 pc/370
there wasn't anything inherent in 3270 that was half-duplex ... it was the operation of the channel to controller and the controller to the head.
In the early 80s, the STL lab was starting to burst at the seams ... and the decision was made to move 300 people from the IMS group to an offsite building. They had looked at remote 3270 terminals into the STL lab vm370/cms development machines (not development for vm370/cms ... but development for lots of database and MVS products). Remote 3270 terminal thruput and operational characteristics were deemed totally unacceptable.
It was eventually decided to deploy NSC HYPERchannel ... as "channel
extender" over T1 microwave channel with local 3270 controllers and
local 3270 terminals at the remote site. I got to totally (re)write
the drivers to support HYPERchannel remote operation (another hobby,
they used to joke that I worked 1st shift in bldg. 28, 2nd shift in
the disk labs, 3rd shift in STL ... and weekends/4th shift at HONE
datacenter up the peninsula). I treated it as high-speed full-duplex
operation ... with HYPERchannel boxes masking a lot of the half-duplex
operation of the 3270 controllers. misc. past posts related to HSDT
project, HYPERchannel boxes &/or supporting the IMS group:
https://www.garlic.com/~lynn/subnetwork.html#hsdt
STL had been in the habit of configuring machines with both 3270 controllers and disk controllers on the same/all available channels. With the HYPERchannel extenders ... not only did the remote IMS group not notice any difference in system response (being at the end of a T1 microwave link) ... system thruput actually improved. 3270 controllers were really slow in lots of ways. Having 3270 controllers on the same channel with disk controllers ... resulted in lots of high channel busy (not so much data transfer but latency for 3270 controllers handling control operations) ... resulting in lots of channel contention with disk controllers. Moving the 3270 controllers directly off the mainframe channels ... replacing them with HYPERchannel boxes ... which were much faster and had much lower channel busy for identical 3270 channel operations ... resulted in an increase in disk i/o thruput.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: lynn@GARLIC.COM (Anne & Lynn Wheeler) Subject: Re: ISPF Counter Newsgroups: bit.listserv.ibm-main Date: 21 Aug 2009 09:29:43 -0700lynn@GARLIC.COM (Anne & Lynn Wheeler) writes:
getting the 3270 controllers off the local channels and replacing them with faster HYPERchannel boxes (doing the same 3270 operations) ... reduced channel busy/contention for disk operations and overall system thruput improved 10-15% (w/o noticeable degradation for the IMS group system response at the remote site).
that much system thruput improvement would easily justify replacing all 3270 controllers with HYPERchannel boxes ... even for local 3270s still in the bldg.
screen shot of the 3270 login screen for the IMS group (moved to remote
bldg.)
https://www.garlic.com/~lynn/vmhyper.jpg
old post with earlier analysis of the 3272/3277 versus 3274/3278
terminal response (separate from later emulated terminal measures where
3277/ANR protocol had three times the upload/download rate of DCA/3278)
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
part of the issue was that vm/cms with .1sec system response and .1sec 3272/3277 hardware response still met the objective of less than .25 second response to the end user. typical TSO users with 1-second response (or greater) hardly noticed the significant slowdown moving to 3274/3278 (ykt was touting a vm/cms system with .2 seconds system response ... I had sjr vm/cms systems with similar hardware & workload that had .11 seconds system response).
sometime after the STL IMS group was moved off-site ... the IMS FE RETAIN group in Boulder faced a similar prospect (of being forced to use remote 3270s). The move was to a bldg. that was line-of-sight to the datacenter ... so T1 infrared/optical modems were used on the roofs of the two bldgs (instead of the microwave used for STL) with a similar HYPERchannel configuration. There was some concern that users would see outages with the optical modems during heavy fog & storms.
For these kinds of T1 links ... we put multiplexors on the trunk and defined a side-channel 56kbit circuit with (at the time, Fireberd) bit error testers (the rest of the T1 was for 3270 terminal activity). The worst case (in boulder) was a white-out snow storm where nobody was able to get into the office ... which showed up as a few bit errors per second.
NSC tried to get the corporation to release my HYPERchannel drivers ... but we couldn't get corporate to authorize it ... so they had to re-implement the same design from scratch. One of the things I had done was to simulate an unrecoverable transmission error as a channel check (CC) ... which would get retried thru channel check error recovery.
Later, after 3090s had been in the field for a year ... I was tracked down by the 3090 product administrator. It turned out that customer 3090s (both VM & MVS) were showing unexpected channel errors (something like an aggregate total of 15-20 channel errors across the whole 3090 customer base for the year ... instead of only 4-5). The additional errors turned out to be because of HYPERchannel drivers (on various customer VM & MVS systems) simulating channel check. After a little research ... I determined that the erep path for IFCC (interface control check) was effectively the same as CC ... and convinced NSC to modify the drivers to reflect simulated IFCC instead of CC.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Hacker charges also an indictment on PCI, expert says Date: 20 Aug, 2009 Blog: Payment Systems Networkre:
another article:
Getting Serious about SQL Injection and the TJX Hacker; Bloggers
wonder when IT people plan to get serious about SQL injection and
other security vulnerabilities.
http://www.pcworld.com/article/170457/getting_serious_about_sql_injection_and_the_tjx_hacker.html
for somewhat unrelated topic drift ... lots of past posts about the
original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr
one of the issues was that in the early part of this decade/century
there was a rather large chip-based deployment in the
US. Unfortunately it was vulnerable to yes card exploit ... some
number of past posts about yes card vulnerability
https://www.garlic.com/~lynn/subintegrity.html#yescard
some of the characteristics of the yes card vulnerability was such that it prompted comments about "billions were spent to prove that chips are less secure than magstripe".
the pilot then evaporated with hardly a trace. since then there have been some number of comments about the cost of chip technology deployment in the states ... with the possible implication that it is the cost of some number of failed deployments ... before finally getting it right (as opposed to the cost of a single deployment).
chip-based tokens were further tarnished in the same time frame by a (different) program involving chip-cards enabled for home consumer internet use. possibly to enhance the uptake ... there was a free give-away of PC card acceptor devices. Unfortunately these were serial port devices ... in a period when it was a recognized requirement to transition to USB devices (because of the enormous customer support issues with installing after-market serial port devices). The resulting enormous customer support issues (with trying to work out the problems with these serial port card acceptor devices) ... resulted in a rapidly spreading opinion that chip-cards weren't practical in the consumer market segment.
I've pointed out that this appeared to be a case of fleeting financial institutional knowledge. In the early to mid 90s ... there were several financial institutions making presentations about moving online home banking (implemented via direct dial-up into institution modem banks) to the internet ... because it offloaded the enormous customer support associated with (serial-port) dial-up modems to the ISPs (although all the business/cash-management online dialup operations repeatedly said that they would never move to the internet because of the enormous security problems ... even with things like SSL).
In that time-frame we had been asked in to consult with small client/server startup that wanted to do payment transactions on their servers ... and the startup had invented this technology they called "SSL" they wanted to use; the result is now frequently referred to as "electronic commerce".
I would guess that somewhat based on "electronic commerce" ... we were then asked to participate in the X9A10 financial standard working group and the resulting x9.59 standard.
small disclaimer ... there is a lot of chip-related stuff in the aads
patent portfolio
https://www.garlic.com/~lynn/aadssummary.htm
I had given a talk on it a few years ago at intel developer's conference in the trusted computing track ... and the guy running the trusted computing group was in the front row ... I took the opportunity to comment that it was nice to see that over the previous couple yrs, the TPM chip was starting to look more and more like my chip; he quipped back that I didn't have a committee of 200 people helping me with the chip design.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Client Certificate UI for Chrome? -- OT anonymous-transaction Date: Fri, 21 Aug 2009 17:59:08 -0400 MailingList: cryptographyOn 08/20/09 00:11, Ray Dillinger wrote:
this somewhat side-stepped whether it was linkable or not ... since it then was back at the financial institution whether the account number was linked to a person or anonymous ... but did meet privacy requirements for retail payments .... depending on gov. & financial institution with regard to any possible "know your customer" mandates ... a court order to the financial institution had the potential of revealing any linkage
There were a couple issues:
1) even as a relying-party-only digital certificate ... the digital certificate gorp resulted in on the order of 100 times payload bloat for typical payment transaction payload size. there were two approaches: a) strip the digital certificate off the payment transaction as early as possible to minimize the onerous payload penalty; b) financial standards looked at doing compressed relying-party-only digital certificates ... possibly getting the payload bloat down to only a factor of ten times (instead of one hundred times).
2) it was trivial to show that the issuing financial institution already had a superset of the information carried in the relying-party-only digital certificate ... so it was redundant and superfluous to repeatedly send such a digital certificate back to the issuing financial institution appended to every payment transaction (completely redundant and superfluous was a separate issue from representing a factor of 100 times payload bloat).
so there were two possible solutions to the enormous payload bloat
a) just digitally sign the transaction and not bother to append the redundant and superfluous relying-party-only certificate
b) the standards work on compression included eliminating fields that the issuing financial institution already possessed ... since it was possible to demonstrate that the issuing financial institution had a superset of all information in a relying-party-only digital certificate ... it was possible to compress the size of the digital certificate to zero bytes. then it was possible to mandate that zero byte digital certificates be appended to every payment transaction (also addressing the enormous payload bloat problem).
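the payload-bloat and zero-byte-compression argument can be illustrated (the byte sizes and field names below are assumptions for illustration, not figures from the standards work):

```python
# illustrative arithmetic for the payload-bloat argument; the byte
# sizes and field names are assumptions, not figures from the x9 work.

payment_payload = 60     # bytes, typical payment transaction
certificate = 6_000      # bytes, typical relying-party-only certificate
bloat = certificate / payment_payload
print(bloat)             # 100.0 -- the "100 times" payload bloat

# field-elimination compression: drop every certificate field the
# issuing institution already has on file; since the issuer holds a
# superset, nothing is left -- a zero-byte digital certificate
cert_fields = {"account", "public_key", "name", "expiry"}
issuer_has = {"account", "public_key", "name", "expiry", "address"}
unique_fields = cert_fields - issuer_has
print(len(unique_fields))  # 0 -- nothing unique left to transmit
```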
the x9.59 financial transaction standard ... some refs
https://www.garlic.com/~lynn/x959.html#x959
just specified requirement for every payment transaction to be authenticated ... and didn't really care whether there was no digital certificate appended ... or whether it was mandated that zero-byte digital certificates were appended.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Does this count as 'computer' folklore? Newsgroups: alt.folklore.computers Date: Fri, 21 Aug 2009 21:34:16 -0400cjt <cheljuba@invalid.invalid> writes:
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Hacker charges also an indictment on PCI, expert says Date: 22 Aug, 2009 Blog: Payment Systems Networkre:
It has been 15 yrs since the early days of "SSL" & "electronic commerce". It has been almost as long since the online (dialup) home banking operations were making presentations that they were moving to the internet .... in large part to be able to offload onto the ISPs the enormous customer care problems related to serial port (modems). However, at the same time the online cash-management/business banking operations were saying that they would never move to the internet because of all the security issues (if anything, things have gotten worse over the intervening period).
The free give-away of serial-port card acceptor devices in the consumer market was only about five years after the move of online banking to the internet .... during which the (ephemeral) financial industry institutional knowledge about the enormous customer care problems with serial port devices appeared to have evaporated. The resulting wide-spread opinion that chipcards aren't practical in the consumer market lingers on to this day.
During the original "electronic commerce" webserver deployments ... webservers based on RDBMS technologies always had a larger number of security problems than non-RDBMS based implementations. The issue appears to be "complexity" ... where security problems are almost always proportional to "complexity" ... and RDBMS significantly increases the complexity (not just any single security problem ... but a long list of security problems because of the significantly more complex environment).
as to RDBMS ... sort of disclaimer ... lots of past posts regarding
original relational/SQL implementation
https://www.garlic.com/~lynn/submain.html#systemr
old email about Jim departing for Tandem and attempting to palm off
responsibility for consulting on IMS and consulting on System/R with
various (financial) institutions:
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
and post from last year about get together celebrating Jim
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing
for a little other topic drift ... reference to cartes 2002
presentation on the yes card and how trivial it was to
counterfeit payment chipcard
https://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html
see the bottom of the above webpage. It only has an oblique reference to the result being worse than magstripe.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ACP, One of the Oldest Open Source Apps Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Sat, 22 Aug 2009 23:12:22 -0400John A Pershing Jr <pershing@alum.mit.edu> writes:
for 370 two processor cache machines ... they slowed the processor cycle time by ten percent to account for cross-cache signaling ... i.e. each processor ran at .9 of a uniprocessor machine ... or two processor base hardware was 1.8 times that of a uniprocessor machine (actual invalidates and other cache overhead ... plus increased multiprocessor software pathlength might slow system thruput down to 1.5 times or less than that of uniprocessor).
I had done some multiprocessor supervisor sleight of hand that got very close to the 1.8 times (nearly straight hardware speed). Also for some of the multiprocessors at HONE ... I was able to get better than two times uniprocessor operation because of some further sleight of hand regarding cache hit ratios (i.e. uniprocessor operation with lots of interrupts would lose cache locality and have an increased cache miss rate ... with two processors ... i could play some more games with preserving cache locality of execution ... with a much higher cache hit rate ... so effectively the MIP thruput rate was better than normally rated for the machines because of the improvement in cache hit rate).
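a toy cache model shows how preserving cache locality could push a two-processor past 2x a busy uniprocessor (the hit rates and miss penalty are illustrative assumptions, not measurements):

```python
# toy cache model; hit rates, miss penalty and the 10% clock slowdown
# are illustrative assumptions.

def relative_speed(hit_rate, hit_cost=1.0, miss_cost=10.0):
    # average cost per reference, inverted to a relative speed
    return 1.0 / (hit_rate * hit_cost + (1 - hit_rate) * miss_cost)

# interrupt-heavy uniprocessor: poor cache locality
uni = relative_speed(hit_rate=0.90)
# two processors each at 0.9 clock, but locality games give a
# better cache hit rate on each processor
mp = 2 * 0.9 * relative_speed(hit_rate=0.95)

print(f"{mp / uni:.2f}")  # 2.36 -- better than 2x the uniprocessor
```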
3081 being a standard two processor machine (and initially there wasn't going to be any uniprocessors) already had both processors speed slowed by ten percent (to handle the cross-cache signaling).
TPF didn't have multiprocessor support at the time ... so eventually they had to come out with a crippled 3081 only running a single processor, and because there wasn't cross-cache signaling going on ... they could bump the processor cycle time up (sort of the reverse of 370, where a single processor was normal and a two-processor was slowed down ... for the 3081 a two-processor was normal ... and so a single processor was sped up) ... for the 3083.
There were also various comments that work on the 3083 uniprocessor for TPF was because of various clone single processor products (not as fast in aggregate as the two processors in a 3081 but faster than any of the older single processor machines offered by IBM).
from:
http://www.isham-research.co.uk/mips_chart.html
3081k was 14.6 aggregate mips or about 7.3 mips/processor ... turning off x-cache signaling would bump that up to about 8.11 mips/processor. 3081kx2 was 15.4 aggregate mip rate or about 7.7 mips/processor ... turning off x-cache signaling would bump that up to about 8.6 mips/processor. 3083jx comes in at only 8.12 mips (vanilla 3081k processor with x-cache signaling turned off).
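the per-processor numbers above are just the chart figures divided by two and adjusted for the ten percent cross-cache-signaling slowdown:

```python
# reproducing the per-processor arithmetic from the mips chart
# figures, with the 10% cross-cache-signaling slowdown removed

k_per_cpu  = 14.6 / 2          # 3081k: 7.3 mips/processor
kx_per_cpu = 15.4 / 2          # 3081kx2: 7.7 mips/processor

print(round(k_per_cpu / 0.9, 2))    # 8.11 -- vs the 3083jx's 8.12
print(round(kx_per_cpu / 0.9, 2))   # 8.56 -- "about 8.6"
```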
some old email indicates that even a hand-picked 9083 only came in marginally faster than a 3083 (not even on the order of the 8.6 that would have been indicated by the 3081kx2). The 9083 did have a different I/O microcode load to bias for the typically higher channel loads.
Now to get to 3084 (a pair of 3081s for four processors) there were real tricks ... since each processor cache had to take constant signals from three other processor caches instead of only one other processor cache.
note that the vm370 kernel also ran dat-off supervisor state (and before it, cp67) ... it was only when virtual machine was executing that DAT was turned on.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ACP, One of the Oldest Open Source Apps Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Sat, 22 Aug 2009 23:47:27 -0400John A Pershing Jr <pershing@alum.mit.edu> writes:
In the move to 3380 ... the total mbytes of data per arm increased by a much larger ratio than the increase in accesses/sec (resulting in a decrease in accesses/sec per mbyte and potentially a decrease in system thruput). There was a session at SHARE discussing problems with getting datacenter managers to not completely fill every byte of disk space (or at least to fill out with relatively dormant data). There was a semi-facetious suggestion of a special microcode load for 3880 controllers that would significantly cut the number of accessible 3380 cylinders ... and market it as a high performance 3380 at a higher price (getting datacenter managers to pay more for a "reduced-size", high performance 3380 ... was viewed as a much simpler task than trying to get datacenter managers to only partially fill a 3380).
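the access-density decline can be sketched (the per-arm capacities are published figures; the random i/o rates are rough assumptions just to show the direction of the ratio):

```python
# access-density arithmetic; capacities per arm are published figures
# (3350 ~317mbytes, 3380 ~630mbytes), the random i/o rates are rough
# assumptions just to show the ratio

def accesses_per_mb(mb_per_arm, ios_per_sec):
    return ios_per_sec / mb_per_arm

older = accesses_per_mb(mb_per_arm=317, ios_per_sec=40)  # 3350-class
newer = accesses_per_mb(mb_per_arm=630, ios_per_sec=50)  # 3380-class

print(f"{newer / older:.2f}")  # 0.63 -- less access capacity per mbyte stored
```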
for some other ACP discussion ... old email by Jim looking at typical
ACP airline res system activity profile
https://www.garlic.com/~lynn/2008i.html#email800325
in this post
https://www.garlic.com/~lynn/2008i.html#39
and followup post mentioning tribute to jim last year
https://www.garlic.com/~lynn/2008i.html#40
referencing some old email
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
about Jim when he was leaving for Tandem, trying to palm off on me
database consulting with IMS group ... and consulting with various
(financial institution) customers looking at doing relational dbms
Now, part of the name change from ACP to TPF was some number of financial institutions starting to use ACP for financial transactions ... including ATM machine transactions.
There was a large california financial institution in the late 70s that
implemented a customized ATM machine transaction processing on VM370
... and claimed that its VM370 implementation running on 370/158 ...
had higher thruput than ACP doing the same workload on a 370/168. The
issue wasn't so much processor consumption ... it was better intelligent
scheduling of disk arms. They had a gimick with transactions coming into
a service virtual machine and then being offloading to transaction
server virtual machines ... where each server virtual machine "owned" a
whole disk and the associated disk arm. They had done extensive study of
ATM transaction patterns ... and tailored disk arm scheduling to
transaction history patterns (something that was much more difficult to
do in ACP environment ... including delaying some transactions in an
attempt to improve disk arm locality of reference). post from last year
with some discussion of the implementation:
https://www.garlic.com/~lynn/2008g.html#14
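The per-disk server scheme can be sketched roughly as follows; the names, cylinder numbers, and elevator-style ordering are invented for illustration (the actual implementation scheduled against studied transaction history patterns):

```python
# Hypothetical sketch: a per-disk server virtual machine holds a batch of
# transactions briefly and services them in elevator (cylinder) order
# instead of arrival (FIFO) order, cutting total arm travel.
from collections import namedtuple

Txn = namedtuple("Txn", "id cylinder")

def elevator_batch(pending, arm_pos):
    """Order a held batch by cylinder, sweeping away from the arm position."""
    ahead  = sorted((t for t in pending if t.cylinder >= arm_pos),
                    key=lambda t: t.cylinder)
    behind = sorted((t for t in pending if t.cylinder < arm_pos),
                    key=lambda t: t.cylinder, reverse=True)
    return ahead + behind    # sweep up, then back down

def seek_distance(order, arm_pos):
    """Total cylinders traveled servicing txns in the given order."""
    total = 0
    for t in order:
        total += abs(t.cylinder - arm_pos)
        arm_pos = t.cylinder
    return total

pending = [Txn(1, 900), Txn(2, 10), Txn(3, 905), Txn(4, 15), Txn(5, 500)]
print(seek_distance(pending, 500))                    # FIFO order: 3560 cylinders
print(seek_distance(elevator_batch(pending, 500), 500))  # swept order: 1300 cylinders
```

Delaying a transaction slightly so it rides in the same sweep as its neighbors trades a little latency for much better arm locality ... which is the trade the paragraph above describes.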
as mentioned, at some time in the past, my wife had been con'ed into
going to POK to be in charge of loosely-coupled (aka mainframe cluster)
architecture ... while there she did the Peer-Coupled Shared Data
architecture
https://www.garlic.com/~lynn/submain.html#shareddata
which, except for IMS hot-standby, saw very little uptake until sysplex ... which contributed to her not staying long in POK. the other problem was battles with the SNA crowd over whether or not SNA had to be used for loosely-coupled/cluster operation ... eventually there was a (temporary) truce where she didn't have to use SNA within the walls of the datacenter (but it had to be used if something crossed the walls of the datacenter)
this references a project I worked on putting out a pu4/pu5
emulation on Series/1 as a product, with a quick follow-on upgrade to
the RIOS platform
https://www.garlic.com/~lynn/2009l.html#46 SNA: conflicting opinions
and spent some time looking at IMS hot-standby ... they were looking at "fast" fall-over to the hot-standby. However, for some IMS customer infrastructures with large numbers of terminals/devices (16k, 32k or more) ... it turned out that having VTAM re-establish that many sessions took 90 minutes elapsed time or more (making IMS hot-standby a somewhat moot subject). Part of the problem was that VTAM session initiation was rather expensive in terms of virtual memory ... trying to do that for a large number of sessions all at one time had VTAM virtual memory requirements quickly far exceeding available real storage configurations, resulting in a whole lot of virtual memory paging activity (contributing to the increase in elapsed time).
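The elapsed-time arithmetic is simple; treating 90 minutes as the re-establishment time for each session count (purely illustrative):

```python
# With the session counts mentioned above and 90 minutes elapsed time,
# VTAM was averaging only a handful of session initiations per second.
for sessions in (16_000, 32_000):
    rate = sessions / (90 * 60)        # sessions per second over 90 minutes
    print(f"{sessions} sessions / 90 min = {rate:.1f} sessions/sec")
```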
the emulated pu4/pu5 implementation maintained session information in
distributed (no-single-point-of-failure) environment ... and would
impersonate everything as cross-domain operation ... so handling
fall-over for something like IMS hot-standby environments wasn't as much
of a problem (although there were still issues with the VTAM on the
hot-standby machine ... looked at creating/maintaining shadow
sessions). Part of old presentation made to SNA TRB in this decade old
post
https://www.garlic.com/~lynn/99.html#67
of course, later we did ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
both high availability and cluster scale-up ... I got the name changed
from ha/6000 to ha/cmp to reflect the cluster scale-up part. some
old cluster scale-up related email
https://www.garlic.com/~lynn/lhwemail.html#medusa
however, as referenced in this post about jan92 meeting
https://www.garlic.com/~lynn/95.html#13
shortly after the jan92 meeting the cluster scale-up part was transferred and we were told we couldn't work on anything with more than four processors ... however, it was too late to change the "CMP" part of the name.
while out marketing ha/cmp, I had coined the terms geographic
survivability and disaster survivability (to differentiate
from disaster/recovery) ... some past posts
https://www.garlic.com/~lynn/submain.html#available
and was also asked to write a section for the corporate continuous availability strategy document. however, both rochester and POK objected to the level of availability described and the section was pulled.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ACP, One of the Oldest Open Source Apps Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Sun, 23 Aug 2009 09:25:50 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
although not as much as the 370/158 integrated channel microcode transition for the 3031 ... with two 158 engines ... one running solely 370 m'code and a 2nd 158 engine as "channel director" running only the integrated channel m'code (and no 370 m'code) ... compared to a 370/158 with a single engine executing both the 370 m'code and the integrated channel m'code (the channel m'code taking cycles away from the 370 m'code).
in the wake of the demise of the future system project, there was a mad rush to get 370 products back into the product pipeline. launching XA/3081 was going to take 6-7 yrs ... so 303x was a stop-gap effort. the 3031 was two 158 engines: one reconfigured with only 370 m'code plus a channel director (the 2nd 158 engine with only integrated channel m'code). the 3032 was a 168-3 reconfigured to use the "channel director" as external channels. the 3033 started out as the 168-3 wiring diagram mapped to faster chip technology. The chips were about 20% faster ... so the 3033 started out only going to be about 20% faster than the 168-3. However, the new technology had something like ten times the circuits per chip (mostly going unused). During 3033 development some highly critical sections were redesigned to take advantage of more on-chip operations ... getting the 3033 up to about 50% faster than the 168-3.
the initial 3081D started out with a per-processor MIP rate about the same as a 3033 uniprocessor. it wasn't until the 3081K that the 3081 started to show a significantly higher MIP rate.
recent post with old 370/158, 3031 (and 4341) benchmark:
https://www.garlic.com/~lynn/2009d.html#54 mainframe performance
i.e.
         158          3031         4341
Rain     45.64 secs   37.03 secs   36.21 secs
Rain4    43.90 secs   36.61 secs   36.13 secs
... snip ...
some past email mentioning oncoming 4341s
https://www.garlic.com/~lynn/lhwemail.html#4341
part of the issue was that POK was seriously surprised at the speed of customer 4341 uptake (as well as internal uptake ... some bldgs were running short of conference rooms as they were converted to 4341 rooms). also ... with the "degradation" in disk relative system thruput ... larger real memory sizes used for caching to compensate for "slow" disks, as well as having more channels, were a major issue. because of "slow" disks, the 3033 was severely memory and channel constrained (because disk speeds weren't scaling up as fast as processor speed).
some large customers were finding that they could deploy hundreds of 4341 w/o requiring expensive raised floor (sort of precursor to PC distributed environment). in the datacenter, a cluster of six 4341s with six channels each and 16mbytes each, had lower aggregate cost and higher aggregate MIP rate than 3033 (aggregate 36 channels to 16 for 3033 and possibly aggregate 96mbytes to 16mbytes for 3033). a two-processor 3033 SMP could possibly have 1.5 times the effective MIP rate (and twice the channels) ... but 16mbyte limitation was even greater constraint.
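A rough aggregate tally of that comparison. The per-machine MIP figures are assumptions (a 4341 taken as roughly a 1 MIPS engine, a 3033 as roughly 4.5 MIPS); the channel and memory counts come from the text:

```python
# Aggregate six-4341 cluster vs. a single 3033 (illustrative MIP figures).
cluster = {"n": 6, "mips": 1.0, "channels": 6, "mbytes": 16}
m3033   = {"n": 1, "mips": 4.5, "channels": 16, "mbytes": 16}

agg = lambda m: (m["n"] * m["mips"], m["n"] * m["channels"], m["n"] * m["mbytes"])
print("6x4341:", agg(cluster))   # (6.0 MIPS, 36 channels, 96 mbytes)
print("3033  :", agg(m3033))     # (4.5 MIPS, 16 channels, 16 mbytes)
```

On all three axes the cluster comes out ahead of the single 3033 ... and even a two-processor 3033 SMP (roughly 1.5x effective MIPS, twice the channels) still shares the single 16mbyte real storage limit.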
there was eventually a hack for the 3033 that allowed attachment of
32mbytes of real storage ... creating a more balanced configuration
(helping to compensate for declining disk relative system thruput). the
hack allowed the additional 16mbytes to be used only for virtual pages,
since instruction addressing was still limited to 24bits (16mbytes).
Basically two unused bits were scavenged in the page table entry and
used to extend the "real page number" (allowing effective real page
addressing up to 64mbytes). IDALs could be used for effective real
addressing above the 16mbyte line. There were all sorts of things that
required bringing a virtual page from above the line to below the
16mbyte line. Originally the proposal was to write the page out to disk
and do a special page operation bringing it back in below the line. I
gave them a special hack that fiddled two special page table entries
&, in DAT-on mode, moved a page below the 16mbyte line (w/o needing an
i/o operation). some past posts discussing the subject:
https://www.garlic.com/~lynn/2000d.html#82 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2005.html#34 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005p.html#19 address space
https://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006g.html#8 The Pankian Metaphor
https://www.garlic.com/~lynn/2007g.html#59 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
https://www.garlic.com/~lynn/2008f.html#12 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2009g.html#71 308x Processors - was "Mainframe articles"
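The bit-scavenging idea can be sketched as follows; the field layout (scavenged bits parked at hypothetical PTE bit positions 13-14) is invented for illustration and is not the actual 370 page table entry format:

```python
# 4K pages under 24-bit real addressing need a 12-bit real page number;
# borrowing 2 previously-unused PTE bits as high-order page-number bits
# extends real page addressing to 14 bits, i.e. 2**14 * 4096 = 64 mbytes.
PAGE_SHIFT = 12                          # 4K pages

def pack_pte(real_page):
    """Park the two high-order page-number bits in (hypothetical) PTE bits 13-14."""
    assert 0 <= real_page < (1 << 14)    # 14 bits -> up to 64mbytes of real storage
    low12 = real_page & 0xFFF            # the architected 12-bit field
    high2 = real_page >> 12              # the two scavenged bits
    return (high2 << 13) | low12

def real_address(pte, offset):
    """Recombine scavenged + architected bits into a full real address."""
    real_page = (((pte >> 13) & 0x3) << 12) | (pte & 0xFFF)
    return (real_page << PAGE_SHIFT) | offset

pte = pack_pte(0x2ABC)                   # a page frame above the 16mbyte line
print(hex(real_address(pte, 0x123)))     # -> 0x2abc123
```

Instructions still see only 24-bit addresses, which is why pages above the line had to be moved below it (by PTE fiddling rather than paging i/o, per the hack above) before code could touch them.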
I had started writing things in the 70s about the problems that the
relative system disk thruput slowdown was having on overall system
thruput. part of this could be traced back to my undergraduate days
working on dynamic adaptive resource management ... and something I
called "scheduling to the bottleneck". in the early 80s some disk
division executives took exception to most recent version
... partially reproduced in this old post
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
and they assigned the division performance group to refute the
statements. after a few weeks, the group came back and effectively
said that I had slightly understated the issue (I hadn't included
channel reconnect and "RPS-miss" as part of the comparison). Their
analysis then turned into a SHARE (63) presentation (B874) with
recommendations on how to configure disks to improve system thruput. a
couple older posts on the subject:
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
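A textbook-style approximation (not taken from the B874 presentation itself) of why RPS-miss mattered: if the channel/path is busy with probability u when the disk tries to reconnect, the drive eats a full-rotation penalty and retries, so the expected extra delay per i/o grows as u/(1-u):

```python
# Expected RPS-miss delay per i/o, assuming independent reconnect attempts
# each blocked with probability u; a miss costs one full rotation.
ROT_MS = 16.7                        # full rotation at 3,600 rpm

for u in (0.1, 0.3, 0.5, 0.7):
    delay = ROT_MS * u / (1 - u)
    print(f"channel util {u:.0%}: ~{delay:.1f} ms expected RPS-miss delay")
```

At 50% path utilization the expected penalty is already a full rotation (~16.7 ms) per i/o, which is why the configuration recommendations focused on channel/path loading and not just arm placement.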
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Hacker charges also an indictment on PCI, expert says Date: 23 Aug, 2009 Blog: Payment Systems Network
re:
During the large US chipcard pilot deployment ... I tried to explain the yes card attack (this was before the cartes 2002 presentation) ... and they would repeatedly dismiss the vulnerability ... making statements that they had precluded such problems by configuring validly issued cards to go offline less frequently (or to go online more frequently)
It took a large number of repeated descriptions to get across that the yes card attack wasn't an attack on valid/issued cards ... it was an attack on the POS terminals (most of the people involved seemed to have an extremely myopic preoccupation with the chipcards w/o regard to what was happening in the rest of the infrastructure) ... whatever they did in how they configured valid cards had no effect on the yes card attack on POS terminals.
It was after it started to dawn on them that the counterfeit yes card attack was a POS terminal attack (not an attack on valid cards) ... that the deployment appeared to evaporate w/o a trace.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970