
1 INFN and GARR networks 1980-1998: CNAF activity
Antonia Ghiselli
Bologna, 18 April 2013
50th anniversary of CNAF

2 Summary
- Birth of INFNet and the gateway's role
- Scenario within INFN and the national computing centers
- Scenario at CERN
- Network gateways
- Data communication technology evolution
- GARR collaboration
- From multiprotocol to Internet
- OSI activity
- TCP/IP transition
- Network-related activities
- Network staff

3 The birth of INFNet
INFN computing scenario during the seventies:
- Bubble chamber data analysis promoted the INFN strategy of developing local computing facilities;
- in parallel, INFN supported the creation of academic computing centers.
By the end of the seventies, in the INFN units:
- many minicomputers were in use for data acquisition and computation (mainly DEC PDP-11);
- remote job entry (RJE) stations were used to access the mainframes of the computing centers for very CPU-demanding jobs.

4 Remote Job Entry Station

5 First remote data connections: RJE station links to mainframes
[Map diagram: INFN sites MI, TS, ISS, LNF, RM, BA, PD, BO (CNAF), PV, PI, TO, GE with RJE stations linked to the mainframes at CILEA (UNIVAC), CINECA (CDC), CNUCE (IBM), CCI (UNIVAC), CSATA (IBM)]

6 Data communication products from DEC
- DECnet
- RJE emulators for CDC, Univac and IBM
- DECnet tests based on dial-up connections between PDP-11s, carried out by INFN (Roma, MI, PV and LNF) with good results
- RJE stations were replaced by the emulators running on PDP-11s
- INFN invented the gateways between DECnet and the RJE emulators, the keystones of the INFNet topology
- CNAF was involved in the gateways to CDC and IBM

7 [Diagram: DECnet network linking computers (C) through gateways (GW)]

8 First INFNet topology, 1980-81
[Map diagram: INFN sites MI, TS, ISS, LNF, RM, BA, PD, PV, PI, TO, GE and CNAF, with gateways (GW) at MI, RM, CNAF and PI, connected to the mainframes at CILEA (UNIVAC), CINECA (CDC), CNUCE (IBM), CCI (UNIVAC), CSATA (IBM); link speed = 9.6 kbps]

9 International link to CERN
CERN data communication scenario:
- CERNet, a CERN-developed packet-switching LAN protocol
- INDEX system: a circuit-switching network for connecting terminals to host computers
- X.25 setup for external connections (the ISO/OSI CONS standard implemented by the PTTs)
1983: leased line at 9.6 kbps CNAF-CERN, connected to an INFN DECnet computer at CERN, CERNGW
- GW for file transfer DECnet-CERNet, by a Roma-LNF collaboration
- GW for interactive access from DECnet to the CERN IBM, by a CNAF-Bologna collaboration

10 Network gateways by CNAF
Gateways based on protocol conversion between DECnet and the different emulators used to access the mainframes:
- DECnet/MUX200 for batch and interactive access to the CDC 6600 from DECnet nodes.
- DECnet/3271 Protocol Emulator for 3270 remote terminal access to IBM systems from DECnet nodes (IBM mainframes at CINECA, CILEA, CNUCE, CERN, SLAC).
- DECnet remote login to the CERN IBM via a gateway to the Wylbur bridge (developed by a CNAF-Bologna collaboration).
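These gateways all follow the same store-translate-forward pattern: accept traffic in one protocol's framing, convert each record, and hand it to the other side. The sketch below is a deliberately simplified, hypothetical illustration of that pattern in modern Python over TCP sockets; it is not the historical DECnet/3271 code, and the addresses, ports and record framings are invented for the example.

```python
# Toy sketch of a protocol-conversion gateway: accept records framed one way on
# side A, rewrite them into the framing side B expects, and forward them.
# All addresses, ports and framing rules here are hypothetical.
import socket

LISTEN_ADDR = ("0.0.0.0", 9000)                # side A: client side (hypothetical)
MAINFRAME_ADDR = ("mainframe.example", 9001)   # side B: emulator/mainframe side (hypothetical)

def convert(record: bytes) -> bytes:
    """Rewrite one side-A record into the framing side B expects (toy rule: upper-case + CRLF)."""
    return record.strip().upper() + b"\r\n"

def serve_once() -> None:
    """Accept one client, then relay and translate its records to the mainframe side."""
    with socket.create_server(LISTEN_ADDR) as srv:
        client, _ = srv.accept()
        with client, socket.create_connection(MAINFRAME_ADDR) as mainframe:
            buffer = b""
            while chunk := client.recv(4096):
                buffer += chunk
                # Side A frames records with a trailing newline; forward each complete record.
                while b"\n" in buffer:
                    record, buffer = buffer.split(b"\n", 1)
                    mainframe.sendall(convert(record))

if __name__ == "__main__":
    serve_once()
```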

11 Gateways for mailing (CNAF-Trieste collaboration)
Mailing systems (~1990):
- VMS Mail on top of DECnet
- RSCS for EARN/BITNET
- PSI Mail over X.25
- SMTP (Simple Mail Transfer Protocol) on the Internet
- X.400, the OSI standard (ISO, CCITT)
GIVEME (General Interface on VMS for Electronic Mail Exchange)
GIVEME2, based on a distributed architecture
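Of the systems listed above, SMTP is the one that ultimately prevailed. As a point of reference, a present-day SMTP submission looks like the sketch below; this is a hedged illustration only, not GIVEME code, and the relay host and addresses are hypothetical. It simply shows the Internet mail side that a gateway like GIVEME had to exchange messages with from the VMS, BITNET and X.400 worlds.

```python
# Minimal present-day SMTP submission with the Python standard library.
# The relay host and the addresses are hypothetical, for illustration only.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@infn-site.example"
msg["To"] = "colleague@cnaf.example"
msg["Subject"] = "Test over SMTP"
msg.set_content("Hello from the Internet mail side of the gateway.")

with smtplib.SMTP("mail-relay.example") as smtp:   # connects on port 25 by default
    smtp.send_message(msg)
```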

12 Other international links
- 1985: 9.6 kbps leased line CNAF-Fermilab, based on DECnet protocols
- 1988: leased line at 14.4 kbps CNAF-DESY (Hamburg), based on DECnet over X.25
- 1988: another line at 14.4 kbps to CERN for X.25
DECnet soon became the most widely used network within the HEP and space physics communities in Europe, the USA, Canada and Japan (HEPnet, and SPAN: NASA in the USA and ESA in Europe).
DECnet services: file transfer and remote file access, remote login, real-time mail, remote job submission, phone (some of them not yet available on the Internet at the time).

13 Communication technology evolution
- 1980-89: slow increase in link speed, from 9.6 to 14.4 kbps, up to 64 kbps
- to increase throughput it was necessary to mesh the topology
- 1989: first 2 Mbps link CNAF-CERN, then within Italy (the first European 2 Mbps link to CERN)
- 1989-96: only 2 Mbps lines
- TDM used to allocate a fixed share of bandwidth to each protocol
- Frame Relay and Cell Relay allowed bandwidth optimization
- meshed topology to increase throughput
- 1997: pilot service at 34 Mbps, via ATM (cell switching)
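A rough back-of-the-envelope comparison shows why statistical multiplexing (Frame Relay/Cell Relay virtual circuits) was more efficient than carving a 2 Mbps line into fixed TDM slices. The numbers below are illustrative assumptions added here, not figures from the talk.

```python
# Illustrative comparison of static TDM slices vs statistical multiplexing on a
# 2 Mbps line shared by four protocol stacks. All numbers are assumptions made
# for the example, not measurements from the GARR backbone.
LINE_KBPS = 2048                        # nominal 2 Mbps line
PROTOCOLS = ["SNA", "DECnet", "X.25", "TCP/IP"]

tdm_slice = LINE_KBPS / len(PROTOCOLS)
print(f"Static TDM: each protocol is capped at {tdm_slice:.0f} kbps, even if the others are idle")

# With virtual circuits (Frame Relay / Cell Relay) an active protocol can burst
# toward the full line rate; a ~10% framing overhead is assumed for illustration.
FRAMING_OVERHEAD = 0.10
burst_kbps = LINE_KBPS * (1 - FRAMING_OVERHEAD)
print(f"Statistical multiplexing: a single busy protocol can use up to ~{burst_kbps:.0f} kbps")
```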

14 INFNet and GARR
- GARR started as a group formed by the major research institutes, the universities and the computing centers (INFN, CNR, ENEA, CINECA, CILEA, CSATA); it became a consortium in 2001.
- The GARR collaboration started working in 1988 on a plan for a backbone based on 2 Mbps links
- 4 network protocols: SNA, DECnet, X.25 and TCP/IP
- TDM technology used to allocate static bandwidth to the 4 protocols
- first GARR backbone 1989-90; CNAF coordinated the technical project and its implementation

15 TDM unit running the 2 Mbps line to CERN

16 TDM unit running the 2 Mbps line to CERN

17 INFNet and GARR (GARR-2)
- 1992: new transfer-mode technologies appeared: Frame Relay and Cell Relay.
- Frame Relay was 'data oriented' and targeted at speeds up to 2 Mbps; it was offered by Telecom through a service called C-LAN. The most interesting feature of C-LAN was the possibility of building meshed topologies via virtual circuits.
- Cell Relay was data, voice and video oriented and targeted at higher speeds. CNAF tested a new device from StrataCom called IPX, which used a proprietary form of cell switching called FastPacket.
- Both technologies were used in 1995 to build GARR-2 in place of the TDMs: more efficiency and more virtual links at 2 Mbps, hence higher backbone throughput (still multi-protocol).
- BGP was introduced as the routing protocol between the backbone and the local domains, in order to make the network more stable.
- CNAF coordinated the project design and implementation.

18 INFNet/GARR 1993
- IXI (International X.25 Infrastructure) from the COSINE project (Cooperation for Open Systems Interconnection Networking in Europe), 1990-94
- since 1983, HEPnet, the European HEP network (X.25, DECnet, TCP/IP, SNA)
- 1993-97: EUROPAnet, the European research network before TEN-34
- ESnet (Energy Sciences Network), DECnet and TCP/IP, USA
- NSFnet (National Science Foundation Network), TCP/IP, and BITNET, USA
- SPAN (Space Physics Analysis Network): NASA, ESA, and Astronet in Italy (DECnet)
- DUBNA, Joint Institute for Nuclear Research

19 GARR-2 in 1996, before the GARR-B pilot
- pre-ATM switches at LNGS, Roma, CNAF and CERN
- several 2 Mbps PVCs via C-LAN/Frame Relay
- new link to EUROPAnet via CNAF/Milano BT

20 INFNet and GARR-B
- An important step forward for high-speed data communication was the introduction of optical fiber and of voice and data integration.
- ATM (Asynchronous Transfer Mode) is a cell-switching technology for very high-speed networks integrating voice, video and data.
- 1996: CNAF, in collaboration with the INFN units, planned and implemented a geographical pilot project to test ATM, based on the Telecom Italia 34 Mbps backbone, SIRIUS, and on the European network JAMES.
- The national testbed was called Pilot-GARR-B and went into production in 1997, before GARR-B. On the basis of the pilot, CNAF defined the 'Executive Projects' of GARR-B.
- The European testbed led to TEN-34.
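Because ATM carries all traffic in fixed 53-byte cells (5-byte header, 48-byte payload), the usable payload rate of the 34 Mbps pilot links is easy to estimate. The sketch below is a textbook back-of-the-envelope calculation added here for illustration; it ignores AAL and transmission-system overheads and is not a figure from the GARR-B pilot.

```python
# Rough ATM overhead estimate for a 34 Mbps link: standard 53-byte cells with a
# 5-byte header leave 48 bytes of payload per cell. AAL and SDH/PDH overheads
# are ignored; this is an illustration, not a measurement from the pilot.
LINE_MBPS = 34.0
CELL_BYTES = 53
PAYLOAD_BYTES = 48

efficiency = PAYLOAD_BYTES / CELL_BYTES                       # ~0.906
payload_mbps = LINE_MBPS * efficiency                         # ~30.8 Mbps of cell payload
cells_per_second = LINE_MBPS * 1_000_000 / (CELL_BYTES * 8)   # ~80,000 cells/s

print(f"cell payload efficiency: {efficiency:.1%}")
print(f"usable payload on a {LINE_MBPS:.0f} Mbps line: {payload_mbps:.1f} Mbps")
print(f"cell rate: {cells_per_second:,.0f} cells per second")
```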

21 From multiprotocols to Internet
OSI model implementations (CCITT, then ITU-T: International Telecommunication Union - Telecommunication Standardization Sector):
X.25 CONS (a network-layer service requiring a circuit to be established before data is transmitted):
- The 'public data network' was the common name given to the international collection of X.25 providers.
- Their combined network had large global coverage during the 1980s and into the 1990s: DATAPAC by Bell Canada, TRANSPAC (the French public data network), AUSTPAC (an Australian public X.25 network), Telepac in Switzerland, ITAPAC in Italy by the PTT, IXI (International X.25 Infrastructure) from the COSINE project; the Coloured Book protocols in the UK were developed on top of X.25.
but
- The development of high-speed LANs with different characteristics (multi-access, broadcast, connectionless) made integration with X.25 difficult.
- Very few OSI applications existed (X.400 mail and remote login), and they were no longer used after the mid-nineties.

22 From multiprotocols to Internet
DECnet Phase V CLNS, the OSI network-layer datagram service:
- DECnet Phase V was Digital's answer to DECnet Phase IV address exhaustion, based on the OSI/CLNS recommendations and on a new 160-bit addressing scheme.
- CNAF/INFN studied and tested DECnet Phase V within the worldwide HEPnet and SPAN, since DECnet was the most widespread protocol within these communities.
but
- The release process took too long, and in the meantime...

23 GARR-B and Internet
- TCP/IP made big progress in stability and reliability
- UNIX-like operating systems became established
- UNIX clusters were increasingly used within INFN
- the World Wide Web protocols run on top of TCP/IP
All this led to the choice of TCP/IP as the single network protocol suite, and it became a de facto standard.
The routing topology proposed by CNAF for GARR-B was adopted by GARR.

24 GARR-B plan for INFN, 1996
- 4 main transport nodes interconnected with links at 34 Mbps
- 16 access points (PoPs)
- 30 INFN sites connected, out of more than 250 sites overall

25 Network-related activities
- Mail coordination, gateway service, mailing-list service
- Videoconferencing at national and international level
- Development of the first web sites for INFN (top level), experiments, projects...
- Information systems: Usenet news, whois (RIPE DB), X.500, DNS
- Distributed computing services (AFS)
- Overall software and hardware coordination and support
- Network topology administration and operation

26 Videoconference systems within INFN (slide by Stefano Zani)
CNAF started supporting videoconferencing applications for the INFN community in the early 1990s.
The videoconferencing systems tested and supported included:
- MBone (Multicast Backbone) VIC and VAT (the main mrouter was at CNAF)
- CU-SeeMe (Cornell University)
- H.320 ISDN video codecs (Aethra, VCON, PictureTel)
- H.320/H.323 over IP, with the INFN central MCU at CNAF (from 1996 to 2013): Ezenia, then PictureTel, Accord/Polycom, Codian

27 Videoconference (slide by Stefano Zani)
- VRVS (Virtual Room Videoconferencing System), CERN/Caltech, and its evolution EVO, now SeeVogh Research (Italian reflector hosted at CNAF from 1998 to 2013)
- CERN is now using Vidyo, and the Italian Vidyo router is hosted at CNAF
- INFN phone conferencing system, Asterisk based (managed at CNAF)

28 Network staff in the eighties and nineties
- At the beginning a small group decided to work on network activity, with a plan not entirely in agreement with the Director's plan. The future success of Digital computers and of DECnet was not clear to all of INFN, and in particular to CNAF.
- There was strong cohesion within the group, between the group and the INFN site computing managers, and with the Computing Commission.
- We felt that the plan would be successful!
- The group was: Massimo Cinque, Antonia Ghiselli, Gianni Govoni, Pietro Matteuzzi, Giulia Vita Finzi, Umberto Zanotti

29 Network staff after 1990
New entries, in order of arrival:
- Cristina Vistoli
- Davide Salomoni
- Elisabetta Ghermandi
- Claudio Demaria
- Stefano Zani
- Tiziana Ferrari
- several degree and PhD theses
CNAF directors:
- up to 1985: Massimo Masetti
- 1986-92: Ettore Remiddi (CNAF became a National Center for Computer Science)
- 1992-98: Enzo Valente

30 People in 1992, after the CNAF reconfiguration
- Allegro Martina (admin)
- Castelvetri Attilio
- Cinque Massimo
- Demaria Claudio
- Fonti Luigi
- Ghermandi Elisabetta
- Ghiselli Antonia
- Govoni Giovanni
- Matteuzzi Pietro
- Pischedda Michela (admin)
- Venturi Danilo
- Vistoli M. Cristina
- Vita Finzi Giulia
- Zanotti Umberto

31 INFNet referees in 1995
- Mauro Campanella, Milano
- Danilo D'Isep, Torino
- Mauro Dell'Orso, Pisa
- Andrea Donati, LNGS
- Antonia Ghiselli, CNAF (coordinator)
- Fernando Liello, Trieste
- Mirco Mazzucato, Padova
- Federico Ruggieri, Bari
- Davide Salomoni, CNAF
- Corrado Salvo, Genova
- Cristina Vistoli, CNAF
- Enzo Valente, CNAF Director
- Umberto Zanotti, CNAF

32 The main collaborations within INFN
- Giovanni Mirabelli, Roma
- Maria Lorenza Ferrer, LNF
- Paolo Capiluppi, Bologna
- Leila Bodini, Milano
- Roberto Gomezel, Trieste
- Riccardo Fantechi, Pisa
- Elio Calligarich and Giorgio Cecchet, Pavia
- Maurizio Morando, Padova
- Paolo Lo Re and Paolo Mastroserio, Napoli
- Claudio Allocchio, Trieste
- Luciano Gaido, Torino
- Alessandro Brunengo, Genova

33 Collaborations with GARR people
- CINECA: Marco Lanzarini, Gabriele Neri, Angelo De Florio, Alessandro Asson
- CILEA: Andrea Mattasoglio, Antonio Cantore, Gianpiero Limongiello
- CNR: Marco Sommani, Blasco Bonito, Gennai, Daniele Vannozzi
- UniPI: Giuseppe Attardi
- UNIBO: Renzo Davoli

34 After 1998
GARR set up its own organization, and the network group decided to move its interest to network applications and mainly to distributed computing, as will be described in the next presentation.

35 References
- 1987, A DECnet/IBM Gateway for 3270 Remote Terminal Access to IBM Systems from VAX Nodes of a DECNET Network, A. Ghiselli, Computer Physics Communications 45 (447-453)
- 1988, Studio di fattibilita' di una infrastruttura di comunicazione ad alta velocita' per la rete della ricerca, A. Ghiselli, A. Adrualdi, G. Neri, E. Valente, INFN/TC-88/24
- 1989, Interworking in INFnet, A. Ghiselli, Computer Networks and ISDN Systems 17 (1989) 371-375
- 1990, The INFN GIVEME 987 gateway, Computer Networks and ISDN Systems 19, 255-260
- 1992, Pre-ATM Switch in INFNet and GARR, C. Demaria, A. Ghiselli, C. Vistoli (internal document)
- 1995, GARR-2, Infrastruttura, Topologia e Management, A. De Florio, A. Ghiselli, A. Mattasoglio, M. Sommani
- 1998, Sperimentazioni ATM per l'utilizzo di Reti LAN e WAN ad Alta Velocita', S. Alborghetti et al., INFN/TC-98/02 (from CNAF: T. Ferrari, A. Ghiselli, P. Matteuzzi, C. Vistoli, S. Zani)

36 References
- Computing at CERN in the LEP Era, May 1983
- Piano esecutivo della rete GARR-B, M.C. Vistoli, INFN/TC 98/24

