Article Title Questions

Sunday, March 18, 2012



Hello Training Series Member,

Here is the next edition of the EzineArticles.com Article Writing and Marketing: Article Title Training Series.

Ask yourself: "Does my article title entice the reader to ask a question?"
E.g. "Why?," "How?," "Who?," "Where?," "When?," etc.

After reading your article title, a question should appear in the reader's mind. Your article body is where you deliver the answer to that question.

This is a powerful concept because you have just engaged the mind of your reader, moving them from a passive to an active state. In the active state, your reader is more likely to find value in your content and thus visit your website.

Never underestimate the power of "How To" article titles. There is clearly a huge demand for articles that solve common problems in an easy-to-read "How To" format.

Don't be afraid to offer your readers more questions they should ask themselves when evaluating the topic of your article. Questions feel highly relevant to your readers because they act like a personal coaching session.

Avoid the shocking question that forces you to stretch the truth to answer the question in your article body. Instead, be creative and interesting with your article title question.

The next edition of the EzineArticles.com Article Writing and Marketing: Article Title Training Series will offer tips to narrow your article title's focus.

Cost-Benefit Analysis of Cloud Computing versus Desktop Grids page 4


[Figure]

4.4 Completion times
The volatility and heterogeneity of VC systems make timely completion of task batches challenging. BOINC has a number of mechanisms for ensuring timely completion. For example, project scientists can set soft deadlines for tasks. When the soft deadline of a task approaches, the local client scheduler increases the task's priority relative to others. In addition, the server-side scheduler uses the deadline to determine timeouts, i.e., when another task instance should be sent out.
With these mechanisms, tasks usually complete at a high success rate. For example, in the World Community Grid project (a non-profit volunteer computing project), 96.1% of 227,485 tasks met their deadline [24].
Nevertheless, VC users should expect a stretch (defined as the ratio of the time a job spends in the system to its execution time) of at least 5, according to our simulation results in [21]. This is because task deadlines are usually long relative to the amount of actual work. The median project deadline is around 9 days, whereas the execution time per task is about 3.67 hours on a dedicated 3 GHz host [6]. Recently, there have been promising results in using predictive models to achieve fast turnaround times [2, 19, 14].
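To make the stretch metric concrete, here is a minimal sketch (Python) using the median figures quoted above; the 18.35-hour example is illustrative, chosen to match the simulated lower bound of 5.

# Stretch = (time a job spends in the system) / (its execution time).
def stretch(time_in_system_hours, execution_hours):
    return time_in_system_hours / execution_hours

execution = 3.67    # median task execution time in hours (3 GHz host) [6]
deadline  = 9 * 24  # median project deadline (9 days), in hours

# Worst case: a task that finishes exactly at the deadline.
print(round(stretch(deadline, execution), 1))   # ~58.9
# The simulated lower bound of 5 corresponds to finishing about
# 18.35 hours after submission:
print(stretch(5 * execution, execution))        # 5.0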
By contrast, on EC2, platform construction takes only the few minutes needed to deploy an image, assuming the platform is not overloaded. As resources are dedicated, application deployment is instantaneous, and task execution and completion times are relatively constant and low.

[Table]


5 Cloud Computing Costs
We present an overview of Amazon's cloud services and pricing [13] as used in our calculations. Amazon has two relevant cloud computing services. First, Amazon offers the Elastic Compute Cloud (EC2) service. EC2 charges for each hour an instance is running, and it offers instances with different compute power and memory. The pricing for EC2 is shown in Tables 1 and 2.
Second, in conjunction with EC2, Amazon offers the Elastic Block Store (EBS) service. This provides reliable and persistent storage with high I/O performance. EBS charges per GB of storage and per million I/O transactions. The pricing for EBS is shown in Table 3. Amazon also offers the Simple Storage Service (S3). This service provides access through web services to persistent data stored in buckets (a one-level directory structure) along with metadata (key/value pairs). S3 charges per GB of storage and per HTTP request. PersistentFS offers a POSIX-compliant file system on top of S3 and is arguably cheaper than EBS for mainly read-only data. However, for volunteer computing projects, the cost difference between S3/PersistentFS and EBS is not significant and does not change our conclusions. Thus we assume all storage occurs on EBS. We do not consider the costs of snapshots, i.e., EBS volume backups to Amazon's S3.
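To illustrate how these charges combine, here is a minimal sketch (Python) of the cost model. The rates are placeholders, since Tables 1-3 are not reproduced in this post; substitute Amazon's actual prices.

# Hypothetical monthly cost combining the EC2 and EBS charges described
# above. All rates are PLACEHOLDERS (the real prices are in Tables 1-3,
# which are not reproduced here).
PRICE_PER_INSTANCE_HOUR = 0.10  # $/instance-hour, small EC2 instance (assumed)
PRICE_PER_GB_MONTH      = 0.10  # $/GB-month of EBS storage (assumed)
PRICE_PER_MILLION_IO    = 0.10  # $/million EBS I/O transactions (assumed)

def monthly_cost(instances, hours_each, ebs_gb, io_millions):
    compute = instances * hours_each * PRICE_PER_INSTANCE_HOUR
    storage = ebs_gb * PRICE_PER_GB_MONTH
    io      = io_millions * PRICE_PER_MILLION_IO
    return compute + storage + io

# e.g., 100 small instances running a full month with 500 GB of EBS:
print(monthly_cost(100, 730, 500, 10))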

[Table]

Cost-Benefit Analysis of Cloud Computing versus Desktop Grids page 3


• Completion. The unavailability or slowness of volunteer resources near the end of the computation can stretch task completion times.
In the subsections below, we quantify the performance costs of each of these stages.
4.1 Execution: Cloud Equivalence
We compute the cloud equivalence of a VC system. We answer the following question: how many nodes in a VC system are required to provide the same compute power in FLOPS as a small dedicated EC2 instance? This is similar to the notion of cluster equivalence in [20]. However, in that study the equivalence was computed for an enterprise (versus Internet) desktop grid, and limited to a few hundred machines.
To compute this cloud equivalence ratio, we used the statistics for SETI@home presented in [26]. We find that the average throughput of SETI@home is about 514.798 TeraFLOPS. We assume a replication factor of 3 (required for result verification and timely task completion), which is quite conservative, as projects such as World Community Grid [29] use levels 50% lower. Thus, the effective throughput is about 171.599 TeraFLOPS.
Moreover, about 318,380 hosts were active in the last 60 days. This means that, on average, each host contributes 0.539 GigaFLOPS. We ran the Whetstone benchmark by means of the BOINC client on an EC2 small instance, and the result was about 1.528 GigaFLOPS for the single core allocated on an AMD Opteron Processor 2218 HE. Thus, the cloud equivalence is about 2.83 active volunteer hosts per dedicated small EC2 instance.
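The arithmetic behind this ratio is simple enough to restate; the following sketch (Python) reproduces the equivalence from the figures quoted above.

# Back-of-the-envelope cloud equivalence, using the figures quoted above.
total_flops  = 514.798e12   # average SETI@home throughput (FLOPS) [26]
replication  = 3            # task instances per work unit (verification)
active_hosts = 318_380      # hosts active in the last 60 days
ec2_small    = 1.528e9      # Whetstone FLOPS on one EC2 small-instance core

effective_flops = total_flops / replication       # ~171.6 TeraFLOPS
flops_per_host  = effective_flops / active_hosts  # ~0.539 GigaFLOPS per host
equivalence     = ec2_small / flops_per_host      # ~2.83 hosts per instance
print(round(equivalence, 2))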
4.2 Platform construction
We compute how long it takes on average for new hosts to register with a project. We used a trace of registration times for SETI@home between April 1, 2007 and January 31, 2009. We found the mean rate of registration to be about 351 volunteer hosts per day. We normalize this rate according to the cloud equivalence (2.83), giving about 124 cloud instances per day.
Figure 1 shows how much time it takes before a certain number of cloud nodes and amount of compute power is reached. For example, we find that it takes about 7.8 days to achieve a platform equivalent to 1,000 cloud nodes (1.5 TeraFLOPS), 2.7 months for 10,000 cloud nodes (15.3 TeraFLOPS), and 2.24 years for 100,000 cloud nodes (152.8 TeraFLOPS).
Note this is a best-case scenario, as the rates were determined from an extremely popular project, SETI@home. While we used the mean rate to plot Figure 1, the rate varies greatly over time. We computed the mean rate per day over week, month, and quarter intervals. While the mean rate was roughly the same, the coefficient of variation was as high as 0.83.
In fact, the rate depends on several factors, such as the level of publicity for the project. Clearly, the rate of registration can plateau for some projects. Also, the calculations do not account for the limited lifetimes of some of the nodes.
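As a quick check of these numbers, the following sketch (Python) applies the mean registration rate; small deviations from the figures above (e.g., 8.1 versus 7.8 days for 1,000 nodes) come from rounding in the quoted rates.

# Time to grow a VC platform to a given cloud-equivalent size,
# assuming the mean SETI@home registration rate holds.
host_rate     = 351     # mean volunteer host registrations per day
equivalence   = 2.83    # volunteer hosts per cloud instance (Section 4.1)
instance_rate = host_rate / equivalence   # ~124 cloud instances per day

def days_to_reach(cloud_nodes):
    return cloud_nodes / instance_rate

for n in (1_000, 10_000, 100_000):
    # ~8 days, ~81 days (2.7 months), ~806 days (2.2 years)
    print(n, round(days_to_reach(n), 1))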

[Figure 1]



4.3 Application Deployment
Assuming a system in steady state, the time to send out all tasks in a batch can be lengthy, as clients use a pull method for retrieving tasks and connect to the server only periodically.
Here we summarize the work of Heien et al. [18] where
the authors determined the time to deploy a batch of tasks.
In particular, the authors found that:
L = TQ / P    (1)
L is the time frame during which tasks are distributed, P is the number of clients, and Q is 1.2 × the number of tasks. T is the reconnection period, a parameter specified by the project scientist denoting the time that must elapse before a client reconnects to the server. By default, in the BOINC VC system, T is six hours.
Figure 2 shows the time required to assign all tasks in a batch, assuming a replication factor of 3. We consider three batch sizes of 100, 1,000, and 10,000 tasks (with replication, a total of 300, 3,000, and 30,000 tasks). For example, deploying a batch with 100, 1,000, or 10,000 unique tasks over a platform with 10,000 cloud nodes (or equivalently 28,300 volunteer nodes) would take 4.6, 45.8, or 458 minutes, respectively.
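These figures follow directly from equation (1); here is a minimal sketch (Python) of the calculation, using the BOINC default reconnection period and the platform size from the example.

# Deployment time L = T*Q/P (equation 1), with Q = 1.2 x the number
# of task instances, for the example platform above.
T_MINUTES   = 6 * 60    # reconnection period T (BOINC default: 6 hours)
P           = 28_300    # volunteer hosts (~10,000 cloud-node equivalents)
REPLICATION = 3

def deployment_minutes(unique_tasks):
    q = 1.2 * unique_tasks * REPLICATION
    return T_MINUTES * q / P

for batch in (100, 1_000, 10_000):
    print(batch, round(deployment_minutes(batch), 1))  # 4.6, 45.8, 458.0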

Cost-Benefit Analysis of Cloud Computing versus Desktop Grids page 2


and computation requirements, and current cloud computing and storage pricing of Amazon's Elastic Compute Cloud (EC2) [4].
2 Related Work
In [23], the authors consider the Amazon data storage service S3 for scientific data-intensive applications. They conclude that monetary costs are high because the storage service bundles availability, durability, and access performance together; data-intensive applications often do not need all three of these features at once. In [28], the authors determine the performance of MPI applications over Amazon's EC2. They find that the performance of MPI distributed-memory parallel programs and OpenMP shared-memory parallel programs over the cloud is significantly worse than in "out-of-cloud" clusters. In [17], the author conducts a general cost-benefit analysis of clouds; however, no specific type of scientific application is considered. In [9], the authors determine the cost of running a scientific workflow over a cloud. They find that the computational costs outweighed the storage costs for their Montage application. By contrast, for comparison, we consider a different type of application (namely, batches of embarrassingly parallel and compute-intensive tasks) and a cost-effective platform consisting of volunteered resources.
It is well known that ISPs have long offered services similar to clouds but at much lower rates [17]. However, ISPs' resources are not as scalable (with respect to variable workloads), configurable, or reliable [17]. The ability to adapt to workload changes is important, as server workloads can change rapidly. Configurability is important to suit project programming and application needs. Reliability is important for project scientists to receive and access results, and also to project volunteers, as they prefer to receive credit for computation as soon as possible. Thus, we do not consider ISPs in our analysis.
3 Cloud versus Volunteer Computing
Cloud and volunteer computing share similar principles, such as transparency: on both platforms, one submits tasks without needing to know the exact resource on which they will execute. For this reason, definitions of cloud computing have included VC systems [30]. However, in practice, cloud computing infrastructures differ from volunteer computing platforms throughout the hardware and software stack. From the perspective of the user, there are two main differences, namely configurability (and thus homogeneity) and quality of service.
Clouds present a configurable environment in terms of the OS and software stack, with the Xen virtual machine [3] forming the basis of EC2. The use of VMs in VC systems is still an active research topic [7, 16]. So while clouds can offer a homogeneous resource pool, the heterogeneity of VC hardware (e.g., general-purpose CPUs, GPUs, the Cell processor of the Sony PlayStation 3) and operating systems (90% are Windows) is not transparent to VC application developers.
Clouds also provide higher quality of service than VC systems. Cloud resources appear dedicated, and there is no risk of preemption. Many cloud computing platforms, such as Amazon's EC2, report several "nines" of reliability. Cloud infrastructures consist of large-scale centralized compute servers with network-attached storage at several international locations. The infrastructure is accessed through services such as S3, which also provide high-level web services for data management. By contrast, guaranteeing data access, storage, or computation across volatile Internet resources over low-bandwidth and high-latency links is still an open and actively pursued research problem.
3.1 Apples to Apples
Given these dramatic differences between cloud and volunteer computing, the question arises of how to compare these systems. We compare the cost-benefits of cloud versus volunteer computing from the perspective of an embarrassingly parallel and compute-intensive application.
This is useful for the following reasons. EC2 is a popular computing environment for task-parallel batch jobs. This is evident from the fact that Condor is used extensively on EC2, and there are even corporations that specialize in Condor deployments over EC2 [8]. An alternative platform (that is perhaps cheaper and provides higher performance) for these tasks could be a VC system. Conversely, VC scientists may consider hosting servers, or even task execution, on EC2, depending on the cost-benefits.
4 Platform Performance Trade-offs
Here we describe the performance costs for an application executed over a VC system, and compare them to EC2 costs. Roughly, the stages of a VC project and application are the following:
• Platform construction. One must wait and gather enough volunteers in the project.
• Application deployment. As VC systems have a client-server pull architecture, an application will be deployed only as fast as the rate of client requests.
• Execution. During execution, we must consider the effective compute rate of the platform given resources' volatility and task redundancy.

Cost-Benefit Analysis of Cloud Computing versus Desktop Grids page 1

Wednesday, June 1, 2011

Cost-Benefit Analysis of Cloud Computing versus Desktop Grids
Derrick Kondo¹, Bahman Javadi¹, Paul Malecot¹, Franck Cappello¹, David P. Anderson²
¹INRIA, France; ²UC Berkeley, USA
Contact author: derrick.kondo@inria.fr

Abstract
Cloud Computing has taken commercial computing by storm. However, adoption of cloud computing platforms and services by the scientific community is in its infancy, as the performance and monetary cost-benefits for scientific applications are not perfectly clear. This is especially true for desktop grid (aka volunteer computing) applications. We compare and contrast the performance and monetary cost-benefits of clouds for desktop grid applications, ranging in computational size and storage. We address the following questions: (i) What are the performance trade-offs in using one platform over the other? (ii) What are the specific resource requirements and monetary costs of creating and deploying applications on each platform? (iii) In light of those monetary and performance cost-benefits, how do these platforms compare? (iv) Can cloud computing platforms be used in combination with desktop grids to improve cost-effectiveness even further? We examine these questions using performance measurements and monetary expenses of real desktop grids and the Amazon Elastic Compute Cloud.

1 Introduction
Computational platforms have traditionally included clusters and computational grids. Recently, two cost-efficient and powerful platforms have emerged, namely cloud and volunteer computing (aka desktop grids).
Cloud Computing has taken commercial computing by storm. Cloud computing platforms provide easy access to a company's high-performance computing and storage infrastructure through web services. With cloud computing, the aim is to hide the complexity of IT infrastructure management from its users. At the same time, cloud computing platforms provide massive scalability, 99.999% reliability, high performance, and specifiable configurability. These capabilities are provided at relatively low costs compared to dedicated infrastructures.
Volunteer Computing (VC) platforms are another cost-efficient and powerful type of platform, using volunteered resources over the Internet. For over a decade, VC platforms have been among the largest and most powerful distributed computing systems on the planet, offering a high return on investment for applications from a wide range of scientific domains (including computational biology, climate prediction, and high-energy physics). Since 2000, over 100 scientific publications (in the world's most prestigious scientific journals, such as Science and Nature) [15, 5] have documented real scientific results achieved on this platform.
Adoption of cloud computing platforms and services by the scientific community is in its infancy as the performance and monetary cost-benefits for scientific applications are not perfectly clear. This is especially true for volunteer computing applications. In this paper, we compare and contrast the performance and monetary cost-benefits of clouds for volunteer computing applications, ranging in size and storage. We examine and answer the following questions:
• What are the performance trade-offs in using one platform over the other in terms of platform construction, application deployment, compute rates, and completion times?
• What are the specific resource requirements and monetary costs of creating and deploying applications on each platform?
• Given those performance and monetary cost-benefits, how do VC platforms compare with cloud platforms?
• Can cloud computing platforms be used in combination with VC systems to improve cost-effectiveness even further?
To help answer these questions, we use server measurements and financial expenses collected from several real VC projects, with emphasis on projects that use the BOINC [1] VC middleware. With this data, we use back-of-the-envelope calculations based on current VC storage

What's your secret?


Maybe you have some uncommon way of dealing with an everyday problem, or you've developed a faster or cheaper way of doing something. Either way, you know something most other people don't know. In fact, as an expert in your niche, you probably have quite a few secrets, but you may not think of them like that.

Strangely, in the case of article writing, the "secrets" you share are the ones that define you the most.

Your secrets are the unique concepts and facts that you have about your niche. You share some of your best secrets every once in a while in articles. Of course, you save some of those secrets to be shared elsewhere, but article writing is a great avenue to share your expertise.

This article template is a great way to share those secrets in a succinct, guided way.

Just follow these steps:

Choose a Secret - Whatever you choose needs to be something that the general population doesn't know already. It could be a startling fact or little-known process that you're ready and willing to uncover.

Write a Captivating Title – Don't give away your secret in the title. Use this part of the article to build interest in the topic, but don't over-hype the secret. If you exaggerate the secret in the title and under-deliver, readers will notice.

Share the Secret – In the introductory paragraphs, share the secret and explain it as well as you can. If you're outlining a secret recipe or step-by-step process, use an ordered list to organize the steps. If your secret is better told in paragraphs, take that route.

Explain Why It's Not Widely-Known – Up until this point, your secret has been just that - a secret. Think about why it's such a special piece of information or why it hasn't become common knowledge and share your thoughts.

Tell Why You're Sharing It – Whether you're just in the mood to help people out or you want to announce a better/faster/cheaper solution to everyday problems, explain to your readers why you've decided to share the secret.

Recap the Secret – Now that the secret is out, recap what it is, what it means and why you're sharing it in the conclusion of the article. Highlight the main points and summarize to conclude the article.

This secret-sharing article template is a great way to connect with your readers on a new, more personal level. Feel free to play with less formal writing when working with this template. Some authors pretend that they're actually "whispering" the secret in their writing. That can be a different way of showing your lighter side as a writer.

Use this article template today to boost your credibility and reputation as an author with one-of-a-kind secrets about your niche.

 