
1 Distributed Computing Economics. Jim Gray, Microsoft Research, gray@microsoft.com. Talk at SD Forum (http://www.sdforum.org/), 18 Sept 2003, PARC Auditorium, Palo Alto, CA. Slides at: http://research.microsoft.com/~gray/talks

2 Two (?) Talks: Distributed Computing Economics; Online Science (what I have been doing).

3 Distributed Computing Economics. Why is Seti@Home a great idea? Why is Napster a great deal? Why is the Computational Grid uneconomic? When does computing on demand work? What is the “right” level of abstraction? Is the Access Grid the real killer app? Based on: Distributed Computing Economics, Jim Gray, Microsoft Tech Report MSR-TR-2003-24, March 2003. http://research.microsoft.com/research/pubs/view.aspx?tr_id=655

4 Computing is Free. Computers cost 1k$ (if you shop right); yes, there are 1μ$ to 1M$ computers, but… So 1 cpu-day == 1$ (computers last 3 years). If you pay the phone bill, Internet bandwidth costs 50 … 500 $/Mbps/month (not including routers and management), so 1 GB costs 1$ to send and 1$ to receive. Caveat: all numbers are rounded to the nearest factor of 3.
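A back-of-the-envelope check of those round numbers (factor-of-3 rounding throughout; the assumption that the WAN link is fully utilized is mine, not the slide's):

$$\frac{1{,}000\ \$}{\sim\!1{,}000\ \text{days of life}} \approx 1\ \$/\text{cpu-day}, \qquad 1\ \text{Mbps for a month} \approx 325\ \text{GB}, \quad \frac{\sim\!300\ \$/\text{month}}{325\ \text{GB}} \approx 1\ \$/\text{GB}$$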

5 Why is Seti@Home a Good Deal? Sending 300 KB costs 3e-4$. The user computes for ½ day: benefit ≈ 0.5$ (half a cpu-day at 1 $/day). ROI: 1500:1
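The ROI is just the ratio of the donated computation's value to the network cost:

$$\text{ROI} \approx \frac{0.5\ \$}{3\times 10^{-4}\ \$} \approx 1{,}700:1,$$

which is the slide's ~1500:1 after the factor-of-3 rounding.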

6 Seti@Home: the world's most powerful computer. 61 TF is the sum of the top 4 systems of the Top 500; 61 TF is 9x the number 2 system; 61 TF is more than the sum of systems 2..10. Seti@Home totals (http://setiathome.ssl.berkeley.edu/totals.html, 20 May 2003):

                            Total                      Last 24 Hours
Users                       4,493,731                  1,900
Results received            886 M                      1.4 M
Total CPU time              1.5 M years                1,514 years
Floating Point Operations   3 E+21 ops (3 zetta-ops)   5 E+18 FLOP/day (61.3 TeraFLOPs)

7 Anecdotal Evidence: everywhere I go I see Beowulfs, i.e. clusters of PCs (or high-slice-price micros). True, I have not visited the Earth Simulator, but… Google, MSN, Hotmail, Yahoo!, NCBI, FNAL, Los Alamos, Cal Tech, MIT, Berkeley, NARO, Smithsonian, Wisconsin, eBay, Amazon.com, Schwab, Citicorp, Beijing, CERN, BaBar, NCSA, Cornell, UCSD, and of course NASA and Cal Tech.

8 Why was Napster a Good Deal? Sending 5 MB costs 5e-3$, about ½ a penny per song; both sender and receiver can afford it. The same logic powers web sites (Yahoo!, …): –1e-3$/page view advertising revenue –1e-5$/page view cost of serving the web page –a 100:1 ROI.
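The page-view version of the same arithmetic:

$$\text{ROI} \approx \frac{10^{-3}\ \$ \ \text{ad revenue per page view}}{10^{-5}\ \$ \ \text{serving cost per page view}} = 100:1$$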

9 The Cost of Computing: computers are NOT free! IBM, HP, and Dell make billions. The capital cost of a TpcC system is mostly storage and storage software (database). Example: an IBM system with 32 cpus, 512 GB RAM, 2,500 disks, 43 TB (680,613 tpmC @ 11.13 $/tpmC, available 11/08/03, http://www.tpc.org/results/individual_results/IBM/IBMp690es_05092003.pdf): a 7.5M$ super-computer. Total data center cost: 40% capital & facilities, 60% staff (including app development).

10 Computing Equivalents: 1$ buys 1 day of cpu time, 4 GB of (fast) RAM for a day, 1 GB of network bandwidth, 1 GB of disk storage for 3 years, 10 M database accesses, 10 TB of disk access (sequential), 10 TB of LAN bandwidth (bulk), or 10 kWh (== 4 days of computer time). Depreciating over 3 years; there are about 1k days in 3 years.

11 Some consequences. Beowulf networking is 10,000x cheaper than WAN networking; factors of 10^5 matter. The cheapest and fastest way to move terabytes cross-country is sneakernet: 24 hours = 4 MB/s; 50$ shipping vs 1,000$ WAN cost. Sending the 10 PB of CERN data via the network would be silly: buy disk bricks in Geneva, fill them, ship them. Reference: TeraScale SneakerNet: Using Inexpensive Disks for Backup, Archiving, and Data Exchange. Jim Gray, Wyman Chong, Tom Barclay, Alex Szalay, Jan vandenBerg. Microsoft Technical Report MSR-TR-2002-54, May 2002. http://research.microsoft.com/research/pubs/view.aspx?tr_id=569
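The cost side of that terabyte comparison, using the ~1 $/GB WAN figure from earlier (whether the 1,000$ counts one end or both is my reading, not stated on the slide):

$$1\ \text{TB} \times \sim\!1\ \$/\text{GB} \approx 1{,}000\ \$ \ \text{per end over the WAN} \qquad \text{vs.} \qquad \sim\!50\ \$ \ \text{to ship the disks overnight}$$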

12 How Do You Move A Terabyte?

Context      Speed (Mbps)   Rent $/month   $/Mbps   $/TB sent   Time/TB
OC192        9600           1,920,000      200      617         14 minutes
Gbps         1000           -              -        -           2.2 hours
100 Mbps     100            -              -        -           1 day
OC3          155            49,000         316      976         14 hours
T3           43             28,000         651      2,010       2 days
T1           1.5            1,200          800      2,469       2 months
Home DSL     0.6            50             117      360         5 months
Home phone   0.04           40             1,000    3,086       6 years

Source: TeraScale Sneakernet, Jim Gray et al., Microsoft Research

13 Computational Grid Economics. To the extent that the computational grid is like Seti@Home or ZetaNet or Folding@home or…, it is a great thing. To the extent that the computational grid is MPI or data analysis, it fails on economic grounds: move the programs to the data, not the data to the programs. The Internet is NOT the cpu backplane. An alternate reality: nearly free networking. –Telcos go bankrupt and price = cost = 0. –Taxpayers pay your phone bill, so price = 0 and the telcos get a BIG government subsidy.

14 When to Export a Task: IF instruction density > 100,000 instructions/byte AND the remote computer is free (costs you nothing) THEN ROI > 0, ELSE ROI < 0.
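That 100,000 instructions/byte threshold follows from the unit costs above, assuming a roughly 10^9 instructions/second cpu (the cpu-speed figure is my assumption, not on the slide):

$$1\ \$ \ \text{of cpu} \approx 10^{9}\ \tfrac{\text{instr}}{\text{s}} \times 86{,}400\ \text{s} \approx 10^{14}\ \text{instructions}, \qquad 1\ \$ \ \text{of WAN} \approx 10^{9}\ \text{bytes}$$

$$\text{break-even density} \approx \frac{10^{14}\ \text{instructions}}{10^{9}\ \text{bytes}} = 10^{5}\ \text{instructions per byte}$$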

15 Computing on Demand. It was called outsourcing / service bureaus in my youth; CSC and IBM did it. It is not a new way of doing things: think payroll. Payroll is the standard outsource. Now it is Hotmail, Salesforce.com, Oracle.com, …. It works for standard apps: COD works for commoditized services. Airlines outsource reservations; banks outsource ATMs. But Amazon, Amex, Wal-Mart, eTrade, eBay, … can't outsource their core competence.

16 What do you Outsource? Disk blocks? Files? SQL? RPC? Application? Examples across that spectrum: Ø, Xdrive, SkyServer, TerraServer, AOL / Google / Hotmail / Yahoo!, ….

17 What's the right abstraction level for Internet-scale distributed computing? Disk block? No, too low. File? No, too low. Database? No, too low. Application? Yes, of course: –Blast search –Google search –send/get eMail –portals that federate astronomy archives (http://skyQuery.Net/). Web Services (.NET, EJB, OGSA) give this abstraction level.

18 Access Grid. Q: What comes after the telephone? A: eMail? A: Instant messaging? Both seem retro: text & emoticons. The Access Grid could revolutionize human communication, but it needs a new idea. Q: What comes after the telephone?

19 Distributed Computing Economics. Why is Seti@Home a great idea? Why is Napster a great deal? Why is the Computational Grid uneconomic? When does computing on demand work? What is the “right” level of abstraction? Is the Access Grid the real killer app? Based on: Distributed Computing Economics, Jim Gray, Microsoft Tech Report MSR-TR-2003-24, March 2003. http://research.microsoft.com/research/pubs/view.aspx?tr_id=655

20 Two (?) Talks: Distributed Computing Economics; Online Science (what I have been doing). –The World Wide Telescope –I have been looking for a distributed DB for most of my career. –I think I found one! (sort of).

21 The World Wide Telescope. I have been looking for a distributed DB for most of my career. I think I found one! (sort of).

22 The Evolution of Science. Observational Science: –the scientist gathers data by direct observation –the scientist analyzes the information. Analytical Science: –the scientist builds an analytical model –and makes predictions. Computational Science: –simulate the analytical model –validate the model and make predictions. Science-Informatics / Information-Exploration Science: –information captured by instruments, or information generated by a simulator –processed by software –placed in a database / files –the scientist analyzes the database / files.

23 Computational Science Evolves. Historically, Computational Science = simulation. New emphasis on informatics: –capturing, –organizing, –summarizing, –analyzing, –visualizing. Largely driven by observational science, but also needed by simulations. Too soon to say whether comp-X and X-info will unify or compete. [Slide images: BaBar (Stanford), a space telescope, a P&E gene sequencer; from http://www.genome.uci.edu/]

24 Both comp-X and X-info are Generating Petabytes. Comp-science is generating an information avalanche: comp-chem, comp-physics, comp-bio, comp-astro, comp-linguistics, comp-music, comp-entertainment, comp-warfare. Science-info is dealing with the information avalanche: bio-info, astro-info, text-info, …

25 Information Avalanche Stories. Turbulence: a 100 TB simulation, then mine the information. BaBar: grows 1 TB/day (2/3 simulation information, 1/3 observational information). CERN: the LHC will generate 1 GB/s, 10 PB/y. VLBA (NRAO) generates 1 GB/s today. NCBI: “only ½ TB” but doubling each year; a very rich dataset. Pixar: 100 TB/movie.

26 Astro-Info: the World Wide Telescope. http://www.astro.caltech.edu/nvoconf/ http://www.voforum.org/ Premise: most data is (or could be) online. The Internet is the world's best telescope: –It has data on every part of the sky. –In every measured spectral band: optical, x-ray, radio… –As deep as the best instruments (of 2 years ago). –It is up when you are up, and the “seeing” is always great (no working at night, no clouds, no moons, no…). –It's a smart telescope: it links objects and data to the literature on them.

27 Why Astronomy Data? It has no commercial value: –no privacy concerns –you can freely share results with others –great for experimenting with algorithms. It is real and well documented: –high-dimensional data (with confidence intervals) –spatial data –temporal data. Many different instruments from many different places and many different times; but it's the same universe, so comparisons make sense & are interesting, and federation is a goal. There is a lot of it (petabytes). It is a great sandbox for data mining algorithms: –can share across companies –university researchers. It is a great way to teach both astronomy and computational science. [Multi-wavelength sky images on the slide: IRAS 100μ, ROSAT ~keV, DSS optical, 2MASS 2μ, IRAS 25μ, NVSS 20cm, WENSS 92cm, GB 6cm]

28 What X-info Needs from us (CS). [Diagram, “not drawn to scale”, with labels: Science Data & Questions; Scientists; Database to store data and execute queries; Plumbers; Data Mining Algorithms; Miners; Question & Answer; Visualization Tools.]

29 Show Maria's 5-minute PPT: the SDSS Image Cutout slide show by Maria A. Nieto-Santisteban of JHU. http://www.research.microsoft.com/~Gray/talks/FDIS_ImgCutoutPresentation.ppt

30 Data Access is Hitting a Wall: FTP and GREP are not adequate. You can GREP 1 MB in a second, GREP 1 GB in a minute, GREP 1 TB in 2 days, and GREP 1 PB in 3 years (oh, and 1 PB is ~5,000 disks). Likewise you can FTP 1 MB in 1 sec and 1 GB in a minute (≈ 1 $/GB), but 1 TB takes 2 days and 1K$, and 1 PB takes 3 years and 1M$. At some point you need indices to limit search, and parallel data search and analysis. This is where databases can help.
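The arithmetic behind those numbers, at the slide's ~1 GB/minute and ~1 $/GB rates (the “2 days” and “3 years” figures reflect the talk's factor-of-3 rounding):

$$1\ \text{TB} \approx 10^{3}\ \text{min} \approx 17\ \text{h} \ \ (\sim\!1\text{K}\$), \qquad 1\ \text{PB} \approx 10^{6}\ \text{min} \approx 2\ \text{yr} \ \ (\sim\!1\text{M}\$)$$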

31 Next-Generation Data Analysis. Looking for: –needles in haystacks (the Higgs particle) –haystacks themselves (dark matter, dark energy). Needles are easier than haystacks. Global statistics have poor scaling: –correlation functions are N², likelihood techniques N³. As data and processing grow at the same rate, we can only keep up with N log N. A way out? –Discard the notion of optimal (data is fuzzy, answers are approximate). –Don't assume infinite computational resources or memory. This requires a combination of statistics & computer science.
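Why only N log N keeps up: if the dataset doubles while the hardware also roughly doubles over the same period, the work grows as

$$\frac{(2N)^{2}}{N^{2}} = 4, \qquad \frac{(2N)^{3}}{N^{3}} = 8, \qquad \frac{2N\log 2N}{N\log N} \approx 2,$$

so N² and N³ statistics fall further behind with every doubling, while N log N algorithms just keep pace.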

32 Analysis and Databases. Statistical analysis deals with: –creating uniform samples –data filtering & censoring bad data –assembling subsets –estimating completeness –counting and building histograms –generating Monte-Carlo subsets –likelihood calculations –hypothesis testing. Traditionally these are performed on files. Most of these tasks are much better done inside a database, close to the data. Move Mohamed to the mountain, not the mountain to Mohamed.

33 Goal: Easy Data Publication & Access. Augment FTP with data query: return intelligent data subsets. Make it easy to: –Publish: record structured data. –Find: find data anywhere in the network, get the subset you need, explore datasets interactively. Realistic goal: make it as easy as publishing/reading web sites today.

34 Federation: Data Federations of Web Services. Massive datasets live near their owners: –near the instrument's software pipeline –near the applications –near the data knowledge and curation –Super Computer centers become Super Data Centers. Each archive publishes a web service: –schema: documents the data –methods on objects (queries). Scientists get “personalized” extracts. Uniform access to multiple archives: –a common global schema.

35 Web Services: The Key? A web SERVER: –given a URL + parameters –returns a web page (often dynamic). A web SERVICE: –given an XML document (SOAP msg) –returns an XML document –tools make this look like an RPC: F(x,y,z) returns (u,v,w) –distributed objects for the web –plus naming, discovery, security, … Internet-scale distributed computing. [Slide diagram: your program calls a web service and gets back a SOAP object in XML, landing data in your address space, versus your program fetching a web page from a web server over http.]

36 The Challenge. This has failed several times before; understand why. Develop: –common data models (schemas) –common interfaces (class/method). Build useful prototypes (nodes and portals). Create a community that uses the prototypes and evolves them.

37 Grid and Web Services Synergy. I believe the Grid will be many web services. IETF standards provide: –naming –authorization / security / privacy –distributed objects: discovery, definition, invocation, object model –higher-level services: workflow, transactions, DB, … Synergy: commercial Internet & Grid tools.

38 Some Interesting Things We are Doing in SDSS (what's new). SkyServer is “done.” Now it is 99% perspiration to load 25 TB (many times) and manage it. I'm using it as a research vehicle to explore new DB ideas. Others are cloning it for other surveys; some are doing DB2 & Oracle variants.

39 SkyServer Overview (10 min). A 10-minute SkyServer tour: –Pixel space: http://skyserver.sdss.org/en/ –Record space: http://skyserver.sdss.org/en/tools/explore/obj.asp?id=2255030989160697 –Doc space: Ned –Set space –Web & query logs –DR1 web service. You can download (thanks to Cathan Cook): –data + database code –website. Reference: Data Mining the SDSS SkyServer Database, MSR-TR-2002-01. Sample queries against the logs:

select top 10 * from weblog..weblog
where yy = 2003 and mm = 7 and dd = 25
order by seq desc

select top 10 * from weblog..sqlLog
order by theTime desc

Links: http://skyserver.pha.jhu.edu/dr1/en/tools/chart/navi.asp http://research.microsoft.com/~gray/SDSS/personal_skyserver.htm

40 Cutout Service (10 min). A typical web service: show it, show the WSDL, show fixing a bug, rush through the code. You can download it. Maria A. Nieto-Santisteban did most of this (Alex and I started it). http://research.microsoft.com/~gray/SDSS/personal_skyserver.htm

41 SkyQuery: http://skyquery.net/ A distributed query tool using a set of web services. Four astronomy archives from Pasadena, Chicago, Baltimore, and Cambridge (England). A feasibility study, built in 6 weeks by –Tanu Malik (JHU CS grad student) –Tamas Budavari (JHU astro postdoc) –with help from Szalay, Thakar, and Gray. Implemented in C# and .NET. It allows queries like:

SELECT o.objId, o.r, o.type, t.objId
FROM SDSS:PhotoPrimary o, TWOMASS:PhotoPrimary t
WHERE XMATCH(o,t) < 3.5
  AND AREA(181.3, -0.76, 6.5)
  AND o.type = 3 AND (o.I - t.m_j) > 2

42 SkyQuery Structure. [Diagram: a SkyQuery Portal, with an Image Cutout service, federating SkyNodes at 2MASS, INT, SDSS, and FIRST.] Each SkyNode publishes: –a Schema web service –a Database web service. The Portal: –plans the query (2 phases) –integrates the answers –is itself a web service.

43 SkyQuery and The Grid. [Same diagram: the SkyQuery Portal and Image Cutout service federating 2MASS, INT, SDSS, and FIRST.] This is a DataGrid, and it works today. It is challenging for OGSA-DAIS (“hello world” in OGSI-DAI is complex). SkyQuery is being used as a vehicle to explore OGSA and DAIS requirements.

44 MyDB added to SkyQuery. [Same diagram, with a MyDB node attached to the portal.] Let users add a personal DB, 1 GB for now. Use it as a workbook. Online and batch queries. It moves the analysis to the data. Users can cooperate (share MyDB). Still exploring this.

45 Some Database Topics. Sparse tables: column vs row store; tag and index tables; pivot. Maplist (cross apply). Dealing with bad statistics. Object Relational has arrived.

46 Column Store Pyramid. Users see fat base tables (a universal relation). Define the popular columns in an index “tag” table: 10% ~ 100 columns. Make many skinny indices: 1% ~ 10 columns. The query optimizer picks the right plan. Automate definition & use. Fast read, slow insert/update: a data warehouse. Note: prior to Yukon, an index had a 16-column limit, a bane of my existence. [Pyramid diagram with levels BASE, INDICES, TAG and query labels: simple, typical, semi-join, fat query, obese query.]

47 Examples.

create table base (
    id bigint,
    f1 int primary key,
    f2 int, …, f1000 int)

create index tag on base (id) include (f1, …, f100)

create index skinny on base (f2, …, f17)

[Same pyramid diagram as the previous slide: BASE, INDICES, TAG.]

48 A Semi-Join Example.

create table fat (a int primary key, b int, c int, fat char(988))
declare @i int, @j int;
set @i = 0
again:
    insert fat values (@i, cast(100*rand() as int), cast(100*rand() as int), ' ')
    set @i = @i + 1;
    if (@i < 1000000) goto again

create index ab on fat(a,b)
create index ac on fat(a,c)

dbcc dropcleanbuffers with no_infomsgs
select count(*) from fat with (index(0)) where c = b
-- Table 'fat'. Scan 3, reads 137,230, CPU: 1.3 s, elapsed 31.1 s.

dbcc dropcleanbuffers with no_infomsgs
select count(*) from fat where b = c
-- Table 'fat'. Scan 2, reads: 3,482, CPU 1.1 s, elapsed: 1.4 s.

[Slide figure: scanning the 1 GB base table for b=c costs 137K IOs and 31 sec; joining the two 8 MB indices ab and ac costs 3.4K IOs and 1.4 sec.]

49 Moving From Rows to Columns: Pivot & UnPivot. What if the table is sparse? LDAP has 7 mandatory and 1,000 optional attributes. Store (row, col, value):

create table Features (
    object varchar, attribute varchar, value varchar,
    primary key (object, attribute))

select *
from (Features pivot value on attribute in (year, color)) as T
where object = '4PNC450'

Features (sparse):
object    attribute   value
4PNC450   year        2000
4PNC450   color       white
4PNC450   make        Ford
4PNC450   model       Taurus
…

Result T:
object    year   color
4PNC450   2000   white
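For comparison, a minimal sketch of the same query in the PIVOT syntax that later shipped in SQL Server 2005 (the slide shows a pre-release sketch; the table and column names below follow the slide):

-- pivot the sparse (object, attribute, value) rows into columns
select object, [year], [color]
from (select object, attribute, value
      from Features) as src
pivot (max(value) for attribute in ([year], [color])) as T
where object = '4PNC450'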

50 Maplist Meets SQL: cross apply. Your table-valued function F(a,b,c) returns all objects related to (a,b,c): spatial neighbors, sub-assemblies, members of a group, items in a folder, … Apply this function to each row: the classic drill-down. Use outer apply if f() may return no rows.

select p.*, q.*
from parent as p
cross apply f(p.a, p.b, p.c) as q
where p.type = 1

[Slide figure: each parent row p1, p2, …, pn fans out to the rows of f(p1), f(p2), …, f(pn).]
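A self-contained sketch of the pattern; the neighbors table and the body of f are hypothetical, invented here for illustration, and only the final select mirrors the slide:

-- a hypothetical inline table-valued function: rows “related to” (a,b,c)
create function dbo.f (@a int, @b int, @c int)
returns table
as return
(   select n.id, n.distance
    from neighbors as n
    where n.a = @a and n.b = @b and n.c = @c )
go

-- maplist: apply f to every qualifying parent row
-- (outer apply would also keep parents for which f returns no rows)
select p.*, q.*
from parent as p
cross apply dbo.f(p.a, p.b, p.c) as q
where p.type = 1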

51 When the SQL Optimizer Guesses Wrong, Life is DREADFUL. SQL is a non-procedural language: the compiler/optimizer picks the procedure based on statistics. If the stats are wrong or missing, bad things happen: queries can run VERY slowly. Strategy 1: allow users to specify the plan. Strategy 2: make the optimizer smarter (and accept hints from the user).

52 An Example of the Problem. A query selects some fields of an index and of a huge table. The bookmark plan: –look in the index for a subset –look that subset up in the fat table. This is –great if subset << table –terrible if subset ~ table. If the statistics are wrong, or if the predicates are not independent, you get the wrong plan. How do you fix the statistics? [Slide figure: an index pointing into a huge table.]

53 A Fix: Let the User Ask for Stats. Create Statistics on View(f1, …, fn): then the optimizer has the right data and picks the right plan. Reference: Statistics on Views, C. Galindo-Legaria, M. Joshi, F. Waas, M. Wu, VLDB 2003. Example Q3: Select count(*) from Galaxy where r … 0.120. Bookmark plan: 34 M random IOs, 520 minutes. With Create Statistics on Galaxy(objID): scan plan, 5 M sequential IOs, 18 minutes. Ultimately this should be automated, but for now it's a step in the right direction.

54 Object Relational Has Arrived. VMs are moving inside the DB. Yukon includes the Common Language Runtime (Oracle & DB2 have similar mechanisms), so C++, VB, C#, and Java are co-equal with Transact-SQL. You can define classes and methods; SQL will store the instances and access them via methods. You can put your analysis code INSIDE the database, which minimizes data movement. You can't move petabytes to the client, but we will soon have petabyte databases. [Slide figure: data and code, with code moving in next to the data.]

55 Spatial Data Search: the Pre-CLR Design. The HTM code body is wrapped by a Transact-SQL sp_HTM (20 lines) plus 469 lines of “glue” looking like:

// Get Coordinates param datatype and param length information
if (srv_paraminfo(pSrvProc, 1, &bType1, &cbMaxLen1, &cbActualLen1, NULL, &fNull1) == FAIL)
    ErrorExit("srv_paraminfo failed...");
// Is Coordinate param a character string?
if (bType1 != SRVBIGVARCHAR && bType1 != SRVBIGCHAR &&
    bType1 != SRVVARCHAR && bType1 != SRVCHAR)
    ErrorExit("Coordinate param should be a string.");
// Is Coordinate param non-null?
if (fNull1 || cbActualLen1 < 1 || cbMaxLen1 <= cbActualLen1)
    ErrorExit("Coordinate param is null.");
// Get pointer to Coordinate param
pzCoordinateSpec = (char *) srv_paramdata(pSrvProc, 1);
if (pzCoordinateSpec == NULL)
    ErrorExit("Coordinate param is null.");
pzCoordinateSpec[cbActualLen1] = 0;
// Get OutputVector datatype and param length information
if (srv_paraminfo(pSrvProc, 2, &bType2, &cbMaxLen2, &cbActualLen2, NULL, &fNull2) == FAIL)
    ErrorExit("Failed to get type info on HTM Vector param...");

56 The “Glue” CLR Design: discard 450 lines of UGLY code. The HTM code body is wrapped by a C# sp_HTM (50 lines):

using System;
using System.Data;
using System.Data.SqlServer;
using System.Data.SqlTypes;
using System.Runtime.InteropServices;

namespace HTM {
  public class HTM_wrapper {
    [DllImport("SQL_HTM.dll")]
    static extern unsafe void * xp_HTM_Cover_get(byte *str);

    public static unsafe void HTM_cover_RS(string input) {
      // convert the input from Unicode (2-byte chars) to an array of bytes (not shown)
      byte * s;        // the byte-encoded input string
      byte * output;   // the HTM routine's result buffer
      // invoke the HTM routine
      output = (byte *) xp_HTM_Cover_get(s);
      // convert the result array to a table
      SqlResultSet outputTable = SqlContext.GetReturnResultSet();
      if (output[0] == 'O') {                  // if output is “OK”
        uint c = *(UInt32 *)(output + 4);      // result row count
        Int64 * r = (Int64 *)(output + 8);     // Int64 r[c-1,2]
        for (int i = 0; i < c; ++i) {
          SqlDataRecord newRecord = outputTable.CreateRecord();
          newRecord.SetSqlInt64(0, r[0]);
          newRecord.SetSqlInt64(1, r[1]);
          r++; r++;
          outputTable.Insert(newRecord);
        }
      }
      // return outputTable;
    }
  }
}

Thanks to Peter Kukol (who wrote this).

57 The Clean CLR Design: discard all the glue code and return an array cast as a table.

CREATE ASSEMBLY HTM_A FROM '\\localhost\HTM\HTM.dll'

CREATE FUNCTION HTM_cover(@input NVARCHAR(100))
RETURNS @t TABLE (
    HTM_ID_START BIGINT NOT NULL PRIMARY KEY,
    HTM_ID_END   BIGINT NOT NULL)
AS EXTERNAL NAME HTM_A:HTM_NS.HTM_C::HTM_cover

using System;
using System.Data;
using System.Data.Sql;
using System.Data.SqlServer;
using System.Data.SqlTypes;
using System.Runtime.InteropServices;

namespace HTM_NS {
  public class HTM_C {
    public static Int64[,2] HTM_cover(string input) {
      // invoke the HTM routine
      return (Int64[,2]) xp_HTM_Cover(input);
      // the actual HTM C# (or C++ or Java or VB) code goes here; your/my code goes here
    }
  }
}

58 Performance (Beta 1), on a 2.2 GHz Xeon:
Call a Transact-SQL function: 33 μs
Call a C# function: 50 μs
Table-valued function: not good in β1
Array (== table) valued function: 200 μs + 27 μs per row

59 The Code: a function written in C# inside the DB, and a program in the DB in a different language (T-SQL) calling that function.

// C# function compiled into ReturnOne.dll
using System;
using System.Data;
using System.Data.SqlServer;
using System.Data.SqlTypes;
using System.Runtime.InteropServices;
namespace ReturnOneNS {
  public class ReturnOneC {
    public static int ReturnOne_Int(int input) {
      return input;
    }
  }
}

-- Register and time it from Transact-SQL
CREATE ASSEMBLY ReturnOneA FROM '\\localhost\C:\ReturnOne.dll'
GO
CREATE FUNCTION ReturnOne_Int(@input INT) RETURNS INT
AS EXTERNAL NAME ReturnOneA:ReturnOneNS.ReturnOneC::ReturnOne_Int
GO
---------------------------------------------
-- time echoing an integer
declare @i int, @j int, @cpu_seconds float, @null_loop float
declare @start datetime, @end datetime
set @j = 0
set @i = 10000
set @start = current_Timestamp
while (@i > 0)
begin
    set @j = @j + 1
    set @i = @i - 1
end
set @end = current_Timestamp
set @null_loop = datediff(ms, @start, @end) / 10.0
set @i = 10000
set @start = current_Timestamp
while (@i > 0)
begin
    select @j = dbo.ReturnOne_Int(@i)
    set @j = @j + 1
    set @i = @i - 1
end
set @end = current_Timestamp
set @cpu_seconds = datediff(ms, @start, @end) / 10.0 - @null_loop
print 'average cpu time per call to ReturnOne_Int was ' + str(@cpu_seconds, 8, 2) + ' micro seconds'

60 What Is the Significance? There is no more inside/outside-the-DB dichotomy: you can put your code near the data. Indeed, we are letting users put personal databases near the data archive. This avoids moving large datasets; just move questions and answers.

61 Meta-Message: trying to fit science data into databases. When it does not fit, something is wrong. Look for solutions: –many solutions come from OR extensions –some are fundamental engine changes: more structure in the DB, richer operator sets, better statistics.

62 Two (?) Talks: Distributed Computing Economics; Online Science (what I have been doing).

