Tag Archives: Technology

How to Configure & Enable the LAN Ports on the HG8245A Modem Router

Huawei EchoLife HG8245A GPON Fiber Optic Modem Router (HUAWEI GPON ONT HG8245A 4FE VOICE)

Enabling the LAN ports on the Huawei HG8245A modem router for the Telkom Speedy network is one of those tasks that is easy once you know how, but tricky for those of us who do not know the right procedure. In this article I will cover how to configure and enable the LAN ports on the Huawei HG8245A modem router.

A question naturally arises, though: what exactly is the point, and what are the use and function, of the LAN ports we are about to enable on this modem router?

The goal is actually very simple: to make all of the LAN ports usable for wired LAN connections, complete with internet access.

By default only 2 of the LAN ports are usable (enabled) and the other 2 are inactive (disabled); in total the HG8245A modem router has 4 LAN ports.

The HG8245A is connected to the fiber-optic network, with 2 UTP patch cords linking a notebook and a PC to the Telkom Speedy network at a capacity of 10 Mbps.

Logging in to the HG8245A Router

Type the router's IP address, 192.168.100.1, into your browser and the HG8245A login page will appear.

Enter the following details (a scripted alternative is sketched after this list):

  • Account: telecomadmin
  • Password: [as provided by the engineer during installation]
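For readers who prefer to script this step, here is a minimal sketch in Python using the requests library, written under stated assumptions: the form field names (UserName, PassWord), the base64 password encoding, and the /login.cgi endpoint are guesses based on common Huawei ONT firmwares, so inspect your router's login page with the browser's developer tools and adjust before relying on it.

    # Minimal sketch of scripting the HG8245A web login. Field names,
    # password encoding and the endpoint are assumptions; verify them
    # against your firmware's actual login form first.
    import base64
    import requests

    ROUTER = "http://192.168.100.1"

    session = requests.Session()
    session.get(ROUTER)  # load the login page so the router sets its session cookie

    resp = session.post(
        f"{ROUTER}/login.cgi",  # hypothetical endpoint; check the form's action attribute
        data={
            "UserName": "telecomadmin",
            "PassWord": base64.b64encode(b"your-password-here").decode(),
        },
    )
    print("Login HTTP status:", resp.status_code)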

Once the login completes, the menu below will appear.

Next step: select LAN to choose which LAN ports to enable.

Then make sure the checkboxes for all the LAN ports are ticked, as in the image above, and click 'Apply'.

Next, select WAN and click the item “2_INTERNET-R-VID_200”.

On the WAN menu, do not change any of the listed items except “Binding Options”. Make sure the checkboxes for LAN1, LAN2, LAN3 and LAN4 are ticked, as in the image below, then click “Apply”.

Now all the LAN ports on our Huawei HG8245A modem router are active and functional: connect a UTP cable from any of the router's LAN ports (LAN1, LAN2, LAN3, LAN4) to a computer or laptop and the internet is ready to use. So you can connect a maximum of 4 devices to the modem router. Enjoy!

This is the 'Ethernet Port Information' view after devices are connected to the modem router.


What Cloud Computing Is, How It Works, and Its Types of Service

What Cloud Computing Is – For some people, the term cloud computing may still be unfamiliar. Cloud computing is indeed still rarely used for everyday digital needs, but it is already widely applied, mostly on server or central computers. And its use will only become more common, because the system lets you run an application on several computers without installing it on each one.

Want to know more about what cloud computing is and how it works? Then read the full review below.

Getting to Know Cloud Computing

Cloud computing is a way of using the internet to store various files in a single database. In this technology, data is stored on a particular server, as are software and other applications, which lets one server computer share them with the other computers that connect to it.

This saves operational cost as well as time, since you no longer need a large-capacity hard disk to store every software file. For example, Microsoft Word can be installed once on the server and then used from other computers without the hassle of reinstalling it.

How the Cloud Computing System Works

How cloud computing works

From the definition of cloud computing above, we can already conclude how the system works. Cloud computing runs online, continuously and without interruption, over the internet. All activity is centered on the server computer, and every kind of data is stored there immediately, ready to use at any time.

Next, the system lets a user run an application, and everything done in that application is saved back to the server computer. End users and the server computer are linked by switches and routers that extend the existing network, so we can open and run the application again anywhere and anytime.

Getting to Know the Types of Cloud Computing Service

  • SaaS (Software as a Service)

This type of service provides ready-to-use applications aimed at end users, so users no longer need to build applications themselves. Examples of SaaS are Gmail, Twitter, Facebook, and so on. Anyone can use these services for all kinds of purposes without the trouble of building a new server or infrastructure.

  • PaaS (Platform as a Service)

PaaS rents users a place to run their applications: for example a database, a framework, or an operating system, in other words a platform on which an application runs.

With this service, users do not need to check or maintain the platform, because that is handled by the service itself, and they can focus on building and developing their application. Providers offering PaaS include Amazon Web Services, Windows Azure, and Google App Engine.

  • IaaS (Infrastructure as a Service)

This service provides cloud-based IT infrastructure to end users. The infrastructure can be hardware resources such as memory, disk storage, or a particular class of server.

The cloud provider simply supplies infrastructure according to the user's demand, so if you want to upgrade or add particular infrastructure you can contact the provider directly. Examples of IaaS providers include Rackspace Cloud, Amazon EC2, and so on.
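As an illustration of how IaaS is consumed on demand, here is a minimal sketch that provisions a virtual server on Amazon EC2 with the boto3 library. The AMI ID is a placeholder and valid AWS credentials are assumed; other IaaS providers expose equivalent APIs.

    # Minimal sketch: renting infrastructure on demand from an IaaS
    # provider (Amazon EC2 via boto3). The AMI ID is a placeholder.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image ID for your region
        InstanceType="t3.micro",          # the "size" of the rented infrastructure
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", instances[0].id)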

That concludes this short review of what cloud computing is and its various types. Hopefully it is useful and adds to your knowledge.

Source: robicomp. Posted on December 24, 2018 by rifzan

Why Cloud Computing Is Needed Today

Cloud computing: you have probably heard this term often. But what is cloud computing? The term refers to the combined use of computer technology and internet-based development. In cloud-based computing, all data resides and is stored on internet servers, and so do the applications and software that users generally need; everything lives on the server computer.

Recently, stakeholders in the business world have realized how important cloud computing is for today's working environment. In a recent survey of 500 executives by KPMG, 42% said that flexible ways of working are the main reason cloud computing is adopted, and 54% expected cloud computing to increase the productivity and satisfaction of their employees. But how can cloud computing raise employee productivity within a span of just 2 years?

Two factors play an important role here.

  • The first is that cloud computing has matured over the past few years. A few years ago cloud computing was still in a trial phase, and what made it attractive to top management was how much it could cut a company's spending.
  • The second factor is the economy. Although the economy today is better than 2 years ago, the need to keep lowering company costs remains essential.

With cloud-based computing, all data resides and is stored on internet servers, as do the applications and software users generally need. In other words, a company does not need to invest in physical servers or maintain server hardware, which lowers its spending.

Although economics is no longer the strongest factor, enterprises are drawn to the flexibility and convenience this kind of service provides. That is why cloud computing will soon become a mainstream service: it adapts easily to the workers who will use it.

There is another factor at play as well: using cloud computing improves interaction with clients, business partners and suppliers, a finding 37% of the survey respondents agreed with. So it can be said that the 3 main benefits of using cloud computing in a company's business are improved business performance (73%), cost reduction (73%), and increased service automation (72%).

Even so, there are risks that pose their own challenges, chief among them the risk of data loss, which remains the main concern and the area where cloud security must keep improving. That said, the same survey shows that respondents' concern about intellectual property theft fell from 78% in 2012 to 50%, while concern about data loss and privacy breaches fell from 83% to just 53%.

This points to a significant change by cloud computing providers, who can increasingly assure users that their data is safely protected and will not be breached. Still, you should study the contract of the cloud service you plan to use and make sure you understand every process in it. Don't be tempted by a very low price into a service that does not fit your business needs.

Source: crmsindonesia.

Machine-to-machine

From the Indonesian-language Wikipedia, the free encyclopedia

Illustration of machine-to-machine communication.

Machine-to-machine (abbreviated M2M) is a term for hardware devices that can connect and communicate with one another without human assistance. Each device can exchange information or carry out a task over a wireless link. Everyday uses of M2M technology include SMS banking, refrigerators that can report their own condition, and home air conditioners that turn on automatically when a car pulls in. M2M technology was first used in telemetry, equipment that monitors the condition of other hardware from a distance. Components of an M2M system include sensors, RFID, Wi-Fi, and every kind of mobile and cellular technology.

M2M Technology in Indonesia

M2M technology in Indonesia was popularized by local telecommunications operators such as XL Axiata, Telkomsel, and Indosat. Telkomsel's M2M services target the banking, automotive, and smart-home sectors; with these services, users can operate equipment or machines from a mobile device. Telkomsel has worked on M2M since 2003 and by 2014 had gained 1.5 million subscribers. Indosat has worked on M2M since 2010, and in 2015 developed M2M solutions for GPS, wireless EDC, and wireless ATM technology. XL Axiata develops its M2M technology in partnership with Ericsson; since first building it in 2012, XL Axiata has developed around 5 kinds of M2M service with a total of 92 thousand subscribers, dominated by industrial customers.

Machine to machine

From Wikipedia, the free encyclopedia

Machine to machine (commonly abbreviated as M2M) refers to direct communication between devices using any communications channel, including wired and wireless. Machine to machine communication can include industrial instrumentation, enabling a sensor or meter to communicate the information it records (such as temperature, inventory level, etc.) to application software that can use it (for example, adjusting an industrial process based on temperature or placing orders to replenish inventory). Such communication was originally accomplished by having a remote network of machines relay information back to a central hub for analysis, which would then be rerouted into a system like a personal computer.

More recent machine to machine communication has changed into a system of networks that transmits data to personal appliances. The expansion of IP networks around the world has made machine to machine communication quicker and easier while using less power. These networks also allow new business opportunities for consumers and suppliers.

Contents
1 History
1.1 In the 2000s
1.2 In the 2010s
2 Applications
3 Networks in prognostics and health management
4 Open initiatives

History

Wired communication machines have used signaling to exchange information since the early 20th century. Machine to machine communication has taken more sophisticated forms since the advent of computer networking automation, and predates cellular communication. It has been utilized in applications such as telemetry, industrial automation, and SCADA.

SCADA system – heat station. The PLC (in an industrial process) controls the flow of cooling water; the SCADA system allows any changes related to the alarm conditions and set points for the flow (such as high temperature, loss of flow, etc.) to be recorded and displayed.

Machine to machine devices that combined telephony and computing were first conceptualized by Theodore Paraskevakos while working on his Caller ID system in 1968, and later patented in the U.S. in 1973. This system was similar to, but distinct from, the panel call indicator of the 1920s and the automatic number identification of the 1940s, which communicated telephone numbers to machines; it was the predecessor to what is now caller ID, which communicates numbers to people.

The first caller identification receiver
Processing Chips

After several attempts and experiments, he realized that in order for the telephone to be able to read the caller’s telephone number, it must possess intelligence, so he developed the method by which the caller’s number is transmitted to the called receiver’s device. His portable transmitter and receiver were reduced to practice in 1971 in a Boeing facility in Huntsville, Alabama, representing the world’s first working prototypes of caller identification devices (shown at right). They were installed at Peoples’ Telephone Company in Leesburg, Alabama and in Athens, Greece, where they were demonstrated to several telephone companies with great success. This method was the basis for modern-day Caller ID technology. He was also the first to introduce the concepts of intelligence, data processing and visual display screens into telephones, which gave rise to the smartphone.

In 1977, Paraskevakos started Metretek, Inc. in Melbourne, Florida to conduct commercial automatic meter reading and load management for electrical services which led to the “smart grid” and “smart meter”. To achieve mass appeal, Paraskevakos sought to reduce the size of the transmitter and the time of transmission through telephone lines by creating a single chip processing and transmission method. Motorola was contracted in 1978 to develop and produce the single chip, but the chip was too large for Motorola’s capabilities at that time. As a result, it became two separate chips (shown at right).

While cellular is becoming more common, many machines still use landlines (POTS, DSL, cable) to connect to the IP network. The cellular M2M communications industry emerged in 1995 when Siemens set up a department inside its mobile phones business unit to develop and launch a GSM data module called “M1”, based on the Siemens mobile phone S6, for M2M industrial applications, enabling machines to communicate over wireless networks. In October 2000, the modules department formed a separate business unit inside Siemens called “Wireless Modules”, which in June 2008 became a standalone company called Cinterion Wireless Modules. The first M1 module was used for early point-of-sale (POS) terminals, in-vehicle telematics, remote monitoring, and tracking-and-tracing applications. Machine to machine technology was first embraced by early implementers such as GM and Hughes Electronics Corporation, who realized the benefits and future potential of the technology. By 1997, machine to machine wireless technology became more prevalent and sophisticated as ruggedized modules were developed and launched for the specific needs of different vertical markets such as automotive telematics.

21st century machine to machine data modules have newer features and capabilities such as onboard global positioning (GPS) technology, flexible land grid array surface mounting, embedded machine to machine optimized smart cards (like phone SIMs) known as MIMs or machine to machine identification modules, and embedded Java, an important enabling technology to accelerate the Internet of things (IoT). Another example of an early use is OnStar’s system of communication.

The hardware components of a machine to machine network are manufactured by a few key players. In 1998, Quake Global started designing and manufacturing machine to machine satellite and terrestrial modems. Initially relying heavily on ORBCOMM network for its satellite communication services, Quake Global expanded its telecommunication product offerings by engaging both satellite and terrestrial networks, which gave Quake Global an edge in offering network-neutral products.

In the 2000s

In 2004, Digi International began producing wireless gateways and routers. Shortly after, in 2006, Digi purchased MaxStream, the manufacturer of XBee radios. These hardware components allowed users to connect machines no matter how remote their location. Since then, Digi has partnered with several companies to connect hundreds of thousands of devices around the world.

In 2004, Christopher Lowery, a UK telecoms entrepreneur, founded Wyless Group, one of the first Mobile Virtual Network Operators (MVNOs) in the M2M space. Operations began in the UK, and Lowery published several patents introducing new features in data protection and management, including fixed IP addressing combined with platform-managed connectivity over VPNs. The company expanded to the US in 2008 and became one of T-Mobile’s largest partners on both sides of the Atlantic.

In 2006, Machine-to-Machine Intelligence (M2Mi) Corp started work with NASA to develop automated machine to machine intelligence. Automated machine to machine intelligence enables a wide variety of mechanisms including wired or wireless tools, sensors, devices, server computers, robots, spacecraft and grid systems to communicate and exchange information efficiently.

In 2009, AT&T and Jasper Technologies, Inc. entered into an agreement to support the creation of machine to machine devices jointly. They have stated that they will be trying to drive further connectivity between consumer electronics and machine to machine wireless networks, which would create a boost in speed and overall power of such devices. 2009 also saw the introduction of real-time management of GSM and CDMA network services for machine to machine applications with the launch of the PRiSMPro™ Platform from machine to machine network provider KORE Telematics. The platform focused on making multi-network management a critical component for efficiency improvements and cost-savings in machine to machine device and network usage.

Also in 2009, Wyless Group introduced PORTHOS™, its multi-operator, multi-application, device agnostic Open Data Management Platform. The company introduced a new industry definition, Global Network Enabler, comprising customer-facing platform management of networks, devices and applications.

Also in 2009, the Norwegian incumbent Telenor concluded ten years of machine to machine research by setting up two entities serving the upper (services) and lower (connectivity) parts of the value chain. Telenor Connexion in Sweden draws on Vodafone’s former research capabilities in its subsidiary Europolitan and serves Europe’s market for services across such typical segments as logistics, fleet management, car safety, healthcare, and smart metering of electricity consumption. Telenor Objects has a similar role supplying connectivity to machine to machine networks across Europe. In the UK, the business MVNO Abica commenced trials of telehealth and telecare applications that required secure data transit via a private APN and HSPA+/4G LTE connectivity with static IP addresses.

In the 2010s

In early 2010 in the U.S., AT&T, KPN, Rogers, Telcel / America Movil and Jasper Technologies, Inc. began to work together on the creation of a machine to machine site, which would serve as a hub for developers in the field of machine to machine communication electronics. In January 2011, Aeris Communications, Inc. announced that it was providing machine to machine telematics services for Hyundai Motor Corporation. Partnerships like these make it easier, faster and more cost-efficient for businesses to use machine to machine. In June 2010, mobile messaging operator Tyntec announced the availability of its high-reliability SMS services for M2M applications.

In March 2011, machine to machine network service provider KORE Wireless teamed with Vodafone Group and Iridium Communications Inc., respectively, to make KORE Global Connect network services available via cellular and satellite connectivity in more than 180 countries, with a single point for billing, support, logistics and relationship management. Later that year, KORE acquired Australia-based Mach Communications Pty Ltd. in response to increased M2M demand within Asia-Pacific markets.

In April 2011, Ericsson acquired Telenor Connexion’s machine to machine platform, in an effort to get more technology and know-how in the growing sector.

In August 2011, Ericsson announced that they have successfully completed the asset purchase agreement to acquire Telenor Connexion’s (machine to machine) technology platform.

According to the independent wireless analyst firm Berg Insight, the number of cellular network connections worldwide used for machine to machine communication was 47.7 million in 2008. The firm forecast that the number of machine to machine connections would grow to 187 million by 2014.

A research study from the E-Plus Group showed that 2.3 million machine to machine smart cards were in the German market in 2010 and projected that the figure would rise to over 5 million by 2013. The main growth driver is the “tracking and tracing” segment, with an expected average growth rate of 30 percent. The fastest growing M2M segment in Germany, with average annual growth of 47 percent, will be consumer electronics.

In April 2013, the OASIS MQTT standards group was formed with the goal of working on a lightweight publish/subscribe reliable messaging transport protocol suitable for communication in M2M/IoT contexts. IBM and StormMQ chair this standards group, and Machine-to-Machine Intelligence (M2Mi) Corp is the secretary. In May 2014, the committee published the MQTT and NIST Cybersecurity Framework Version 1.0 committee note to provide guidance for organizations wishing to deploy MQTT in a way consistent with the NIST Framework for Improving Critical Infrastructure Cybersecurity.
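To make the publish/subscribe model concrete, here is a minimal sketch of an MQTT client written against the paho-mqtt 1.x Python API (pip install paho-mqtt); the public test broker and the topic name are illustrative assumptions, not part of the standard itself.

    # Minimal MQTT publish/subscribe sketch with paho-mqtt (1.x API).
    # Broker and topic are illustrative; any MQTT broker works the same way.
    import paho.mqtt.client as mqtt

    TOPIC = "demo/machines/boiler-1/temperature"

    def on_message(client, userdata, msg):
        # Called for every message the broker delivers on our subscription.
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("test.mosquitto.org", 1883)  # public test broker
    client.subscribe(TOPIC)

    # A machine publishes a reading; every subscriber to the topic receives it.
    client.publish(TOPIC, "72.5")
    client.loop_forever()  # process network traffic and dispatch callbacks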

In May 2013, machine to machine network service providers KORE Telematics, Oracle, Deutsche Telekom, Digi International, ORBCOMM and Telit formed the International Machine to Machine Council (IMC). The first trade organization to service the entire machine to machine ecosystem, the IMC aims at making machine to machine ubiquitous by helping companies install and manage the communication between machines.

Applications

Wireless networks that are all interconnected can serve to improve production and efficiency in various areas, including the machinery that builds cars and systems that let product developers know when certain products need to be taken in for maintenance, and why. Such information streamlines the products consumers buy and keeps them working at their highest efficiency.

Commonplace consumer application

Another application is to use wireless technology to monitor systems, such as utility meters. This lets the owner of the meter know if certain elements have been tampered with, which serves as a way to stop fraud. In Quebec, Rogers will connect Hydro-Québec’s central system with up to 600 smart meter collectors, which aggregate data relayed from the province’s 3.8 million smart meters. In the UK, Telefónica won a €1.78 billion ($2.4 billion) smart-meter contract to provide connectivity services over a period of 15 years in the central and southern regions of the country, the industry’s biggest deal yet.

A third application is to use wireless networks to update digital billboards. This allows advertisers to display different messages based on time of day or day-of-week, and allows quick global changes for messages, such as pricing changes for gasoline.

The industrial machine to machine market is undergoing a fast transformation as enterprises are increasingly realizing the value of connecting geographically dispersed people, devices, sensors and machines to corporate networks. Today, industries such as oil and gas, precision agriculture, military, government, smart cities/municipalities, manufacturing, and public utilities, among others, utilize machine to machine technologies for a myriad of applications. Many companies have enabled complex and efficient data networking technologies to provide capabilities such as high-speed data transmission, mobile mesh networking, and 3G/4G cellular backhaul.

Telematics and in-vehicle entertainment is an area of focus for machine to machine developers. Recent examples include Ford Motor Company, which teamed with AT&T to wirelessly connect the Ford Focus Electric with an embedded wireless connection and a dedicated app that lets the owner monitor and control vehicle charge settings, plan single- or multiple-stop journeys, locate charging stations, and pre-heat or cool the car. In 2011, Audi partnered with T-Mobile and RACO Wireless to offer Audi Connect, which gives users access to news, weather, and fuel prices while turning the vehicle into a secure mobile Wi-Fi hotspot that lets passengers access the Internet.

Networks in prognostics and health management

Machine to machine wireless networks can serve to improve the production and efficiency of machines, to enhance the reliability and safety of complex systems, and to promote the life-cycle management for key assets and products. By applying Prognostic and Health Management (PHM) techniques in machine networks, the following goals can be achieved or improved:

  • Near-zero downtime performance of machines and systems;
  • Health management of a fleet of similar machines.

The application of intelligent analysis tools and a Device-to-Business (D2B)™ informatics platform forms the basis of an e-maintenance machine network that can lead to near-zero-downtime performance of machines and systems. The e-maintenance machine network integrates the factory-floor system with the e-business system, enabling real-time decision making aimed at near-zero downtime, reduced uncertainty, and improved system performance. In addition, with the help of highly interconnected machine networks and advanced intelligent analysis tools, several novel maintenance types are now possible: remote maintenance without dispatching engineers on-site, online maintenance without shutting down the operating machines or systems, and predictive maintenance before a machine failure becomes catastrophic. Together, these benefits of the e-maintenance machine network significantly improve maintenance efficiency and transparency.

The framework of an e-maintenance machine network consists of sensors, a data acquisition system, a communication network, analytic agents, a decision-support knowledge base, an information synchronization interface, and an e-business system for decision making. First, sensors, controllers, and operators with data acquisition collect raw data from equipment and send it to the data transformation layer automatically via the internet or an intranet. The data transformation layer then employs signal-processing tools and feature-extraction methods to convert the raw data into useful information. This converted information often carries rich information about the reliability and availability of the machines or system and is better suited for intelligent analysis tools to process further. The synchronization module and intelligent tools comprise the main processing power of the e-maintenance machine network, providing optimization, prediction, clustering, classification, benchmarking, and so on. The results from this module can then be synchronized and shared with the e-business system for decision making. In real applications, the synchronization module connects with other departments at the decision-making level, such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Supply Chain Management (SCM).
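As a small illustration of the data transformation layer described above, the sketch below turns one raw vibration window into simple condition-indicator features. The signal is synthetic; a real deployment would read from the data acquisition system instead.

    # Minimal sketch of the data transformation layer: raw signal in,
    # condition-indicator features out. The input here is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    raw_signal = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.3 * rng.standard_normal(4096)

    def extract_features(x: np.ndarray) -> dict:
        """Compute simple health indicators from one raw data window."""
        rms = float(np.sqrt(np.mean(x ** 2)))   # overall vibration energy
        peak = float(np.max(np.abs(x)))         # largest excursion
        return {"rms": rms, "peak": peak, "crest_factor": peak / rms}

    print(extract_features(raw_signal))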

Another application of a machine to machine network is health management for a fleet of similar machines using a clustering approach. This method was introduced to address the challenge of developing fault detection models for applications with non-stationary operating regimes or with incomplete data. The overall methodology consists of two stages:

  1. Fleet Clustering to group similar machines for sound comparison;
  2. Local Cluster Fault Detection to evaluate the similarity of individual machines to the fleet features.

The purpose of fleet clustering is to aggregate working units with similar configurations or working conditions into a group for sound comparison, and subsequently to create local fault detection models where global models cannot be established. Within the framework of the peer-to-peer comparison methodology, the machine to machine network is crucial for instantaneous information sharing between different working units, and thus forms the basis of fleet-level health management technology.
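The sketch below illustrates the two stages under stated assumptions: the per-machine features are synthetic placeholders, and KMeans from scikit-learn stands in for whatever clustering method a real fleet system would use.

    # Minimal sketch of fleet clustering (stage 1) and peer comparison
    # (stage 2). Features are synthetic; a real system would use
    # condition indicators collected over the M2M network.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    # One row per machine: [mean load, mean temperature, vibration RMS]
    fleet_features = np.vstack([
        rng.normal([0.4, 55.0, 1.0], 0.05, size=(10, 3)),  # low-load units
        rng.normal([0.9, 80.0, 2.5], 0.05, size=(10, 3)),  # high-load units
    ])

    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fleet_features)
    print("Cluster assignment per machine:", clusters)

    # Stage 2: within each cluster, flag units that deviate from their peers.
    for c in np.unique(clusters):
        group = fleet_features[clusters == c]
        deviation = np.linalg.norm(group - group.mean(axis=0), axis=1)
        print(f"cluster {c}: max peer deviation = {deviation.max():.3f}")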

The fleet-level health management using the clustering approach was patented for its application in wind turbine health monitoring, after being validated on a wind turbine fleet across three distributed wind farms. Unlike other industrial devices with fixed or static regimes, a wind turbine's operating condition is largely dictated by wind speed and other ambient factors. Even though the multi-modeling methodology can be applied in this scenario, the number of models needed for the wind turbines in a wind farm is almost unbounded, so it may not be a practical solution. Instead, by leveraging data generated from other similar turbines in the network, the problem can be properly addressed and local fault detection models can be effectively built. The reported results of fleet-level health management for wind turbines demonstrated the effectiveness of applying a cluster-based fault detection methodology in wind turbine networks.

Fault detection for a fleet of industrial robots faces similar difficulties: a lack of fault detection models and dynamic operating conditions. Industrial robots are crucial in automotive manufacturing and perform different tasks such as welding, material handling, and painting. In this scenario, robot maintenance becomes critical to ensure continuous production and avoid downtime. Historically, the fault detection models for all industrial robots were trained in the same way: critical model parameters such as training samples, components, and alarm limits were set the same for all units regardless of their different functions. Even though these identical fault detection models can sometimes identify faults effectively, numerous false alarms discourage users from trusting the reliability of the system. Within a machine network, however, industrial robots with similar tasks or working regimes can be grouped together; the abnormal units in a cluster can then be prioritized for maintenance via training-based or instantaneous comparison. This peer-to-peer comparison methodology inside a machine network could improve fault detection accuracy significantly.

Open Initiatives

  • Eclipse machine to machine industry working group (open communication protocols, tools, and frameworks), the umbrella of various projects including Koneki, Eclipse SCADA
  • ITU-T Focus Group M2M (global standardization initiative for a common M2M service layer)[39]
  • 3GPP studies security aspects for machine to machine (M2M) equipment, in particular automatic SIM activation covering remote provisioning and change of subscription.[40]
  • Weightless – standard group focusing on using TV “white space” for M2M
  • XMPP (Jabber) protocol[41]
  • OASIS MQTT – standards group working on a lightweight publish/subscribe reliable messaging transport protocol suitable for communication in M2M/IoT contexts.[27]
  • Open Mobile Alliance (OMA_LWM2M) protocol[42]
  • RPMA (Ingenu)
  • Industrial Internet Consortium

Getting to Know the Types of Cloud Computing by Function

Source: progresstech. March 30, 2017. David Wong

Cloud computing? Many of us have heard the term often, or, if not, perhaps its Indonesian rendering, “Komputasi Awan”.

Is the computer in the clouds? It does not mean the computer is literally in the clouds. There are many angles from which to explain what cloud computing is, and Wikipedia itself explains it quite clearly.

Cloud computing (komputasi awan) is the combined use of computer technology with internet-based development. The term 'cloud' itself is a name given to internet networking technology.

In cloud-based computing, all data resides and is stored on internet servers, as do the applications and software users generally need; everything sits on the server computer.

So we do not need to perform any installation ourselves; however, users must be connected to the internet to access and run the applications that live on that server.

In other words, a user may need only a computer and an internet connection to reach the internet server and store data on the server computer, without having to provide a large-capacity hard disk of their own for that data.

The same goes for application programs, say Microsoft Office, Excel and so on: users can run such applications on the internet server, with no need to go to the trouble of installing them on their own computer.

In other words, we do not need to invest in physical servers, and we do not need to maintain server hardware. Pretty good, isn't it?

Types of Cloud Computing

By type of service, cloud computing is divided as follows:

Software as a Service (SaaS)

This is a cloud computing service in which we simply use software that has already been provided. The user only needs to know that the software runs and can be used properly.

Examples: public email services (Gmail, YahooMail, Hotmail), social networks (Facebook, Twitter, LinkedIn), instant messaging (Yahoo Messenger, Skype, Line, WhatsApp) and many more.

Over time, much of the software that we could once enjoy only by installing it on our own computers (on-premise) can now be used through cloud computing.

The advantage is that we do not need to buy a license; we just connect to the internet to use it. For example, Microsoft Office can now be used through Office 365, and the Adobe suite through Adobe Creative Cloud.

Platform as a Service (PaaS)

This is a cloud computing service in which, by analogy, we rent a 'house' together with its environment (operating system, network, database engine, application framework, and so on) in which to run the application we build.

We do not need to worry about preparing or maintaining the 'house'; what matters is that the application we build runs well in it. Maintaining this 'house' is the responsibility of the service provider.

As an analogy, suppose we rent a hotel room: we simply sleep in the room we rented, without caring how the room and its surroundings are maintained. What matters is that we are comfortable staying there; if at some point the service makes us uncomfortable, we can simply pack up and move to a hotel with better service.

Examples of PaaS providers: Amazon Web Services and Windows Azure; even traditional hosting is an example of PaaS.

The advantage of PaaS is that we, as developers, can focus on the application we are building, without having to think about the day-to-day operation of the 'house' it runs in.

Infrastructure as a Service (IaaS)

This is a cloud computing service in which we can 'rent' IT infrastructure (compute, storage, memory, network). We can define how much compute (CPU), data storage, memory (RAM), bandwidth, and other configuration we want to rent.

Put simply, IaaS is renting a blank virtual computer: once rented, we can use it however we need. We can install any operating system and any application on top of it.

Examples of IaaS providers: Amazon EC2, Windows Azure (soon), TelkomCloud, BizNetCloud, and so on.

The advantage of IaaS is that we do not need to buy a physical computer, and the configuration of the virtual computer can be changed (scaled up or down) easily. For example, when the virtual machine becomes overloaded, we can add CPU, RAM, storage and more right away.
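As a concrete sketch of that scale-up step, the snippet below resizes an EC2 instance with boto3; the instance ID is a placeholder, and EC2 requires the instance to be stopped before its type can change. Other IaaS providers offer equivalent calls.

    # Minimal sketch of scaling up a rented virtual machine on EC2.
    # The instance ID is a placeholder; AWS credentials are assumed.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"  # placeholder

    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # "Add CPU/RAM" by moving to a larger instance type.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "t3.large"},
    )
    ec2.start_instances(InstanceIds=[instance_id])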

Public Cloud, Private Cloud and Hybrid Cloud

Having covered what cloud computing is and its service types, let's now discuss some terminology often used in cloud computing, starting with these 3 terms: public cloud, private cloud and hybrid cloud.

Public Cloud

A public cloud is a cloud computing service provided for the general public. As users, we simply register, or in some cases can use the service right away. Many public cloud services are free, while others require payment.

Examples of free public clouds: Windows Live Mail, GoogleMail, Facebook, Twitter and so on.

Examples of paid public clouds: SalesForce, Office 365, Adobe Creative Cloud, Windows Azure, Amazon EC2, and so on.

Advantages:

We do not need to invest in or maintain infrastructure, platforms or applications. We simply use the service for free (for free offerings) or pay for what we use (pay as you go).

Disadvantages:

We depend heavily on the quality of the internet service we use; if the connection goes down, we cannot use the service, so the internet infrastructure must be planned carefully.

Not every service provider guarantees the security of our data, so we must choose a public cloud provider carefully: study the provider's profile and Service Level Agreement closely.

Private Cloud

A private cloud is a cloud computing service provided to meet the internal needs of an organization or company. Usually the IT department acts as the service provider and the other departments are the users.

As the service provider, the IT department is of course responsible for keeping the service running well, to the quality standard the company has set, across the infrastructure, platform and applications involved.

Example services:

SaaS: internal web applications, SharePoint, an internal mail server, a database server for internal needs.
PaaS: operating system + web server + framework + database provided for internal use.
IaaS: virtual machines that can be requested according to internal needs.

Advantages:

Data security is assured, because it is managed in-house.
Saves internet bandwidth when the service is only accessed from the internal network.
Business processes do not depend on the internet connection, though they still depend on the local (intranet) connection.

Disadvantages:

Large up-front investment, since we have to prepare the infrastructure ourselves.
Staff are needed to maintain the service and keep it running well.

Hybrid Cloud

A hybrid cloud is a combination of public cloud and private cloud services implemented by an organization or company. With a hybrid cloud, we can choose which business processes can be moved to the public cloud and which must keep running in the private cloud.

Examples:
Company A rents a service from Windows Azure (public cloud) as the “house” for an application it has built, but under the applicable legislation company A's customer data may not be placed with a third party. Since company A complies with the rules, its customer data stays in its own database (private cloud), and the application connects to that internal database.

Company B subscribes to Office 365 (public cloud); because company B already has Active Directory running on its own Windows Server (private cloud), that Active Directory can be configured as the identity used to log in to Office 365.

Advantages:
Data security is assured, because the data can be managed in-house (note that this does NOT mean storing data in a public cloud is insecure).

More freedom to choose which business processes must keep running in the private cloud and which can be moved to the public cloud, while still guaranteeing integration between the two.

Disadvantages:
For applications that require integration between the public cloud and the private cloud, the internet infrastructure must be planned carefully.

Cloud Computing (Komputasi Awan)

From the Indonesian-language Wikipedia, the free encyclopedia

Conceptual diagram of cloud computing

Cloud computing (Indonesian: komputasi awan) is the combined use of computer technology ('computing') and internet-based development ('the cloud'). The cloud is a metaphor for the internet, like the cloud often drawn in computer network diagrams. As in those diagrams, the cloud in cloud computing is an abstraction of the complex infrastructure it hides. It is a method of computing in which IT-related capabilities are presented as a service, so users can access them over the internet ('in the cloud') without knowing what lies inside it, being expert in it, or having control over the technology infrastructure behind it. According to a 2008 paper published by IEEE Internet Computing, cloud computing is a paradigm in which information is stored permanently on servers on the internet and only temporarily on users' (client) computers, including desktops, tablet computers, notebooks, wall computers, handhelds, sensors, monitors and so on.

Cloud computing is a general concept that encompasses SaaS, Web 2.0, and other widely known recent technology trends, whose common theme is reliance on the internet to meet users' computing needs. For example, Google Apps provides common business applications online, accessed through a web browser, with the software and data stored on servers. Cloud computing is currently among the latest technology trends; one example of its development is iCloud.

Contents
1 History of Cloud Computing
1.1 The 1960s
1.2 1995
1.3 The Late 1990s
1.4 2000
1.5 2005 – Present
2 Benefits of Cloud Computing
3 Cloud Computing Services
3.1 Infrastructure as a Service (IaaS)
3.2 Platform as a Service (PaaS)
3.3 Software as a Service (SaaS)
4 Methods and Implementation of Cloud Computing
4.1 How Cloud Computing Works
4.2 Implementations of Cloud Computing
4.2.1 Cloud computing in government (e-government)
5 Problems Faced
6 Examples of Cloud Computing
6.1 Google Drive
6.1.1 Google Drive Features
6.2 Windows Azure
6.2.1 Windows Azure Features

History of Cloud Computing

The fundamental concept behind cloud computing dates to the 1950s, when large-scale mainframe computers available to universities and corporations were accessed through terminal computers known as static terminals, which could only be used for communication and had no internal processing capacity. To make the relatively expensive mainframes efficient, physical access to the computer evolved into time-sharing of CPU capacity. This eliminated idle periods on the mainframe and allowed a better return on the investment. Until the mid-1970s this approach was known as RJE (Remote Job Entry), and it was associated mainly with IBM and DEC mainframes.

In the 1960s, John McCarthy argued that "computation may someday be organized as a public utility." Douglas Parkhill's book The Challenge of the Computer Utility compared the electricity industry, and the use of electricity by the public and government, with the provision of cloud computing. The scientist Herb Grosch postulated that the entire world would someday operate on dumb terminals powered by about 15 large data centers. Because such computers were very costly and sophisticated, many companies and other entities supplied themselves with computing capability through time sharing, and organizations arose to serve this market, such as GE's GEISCO, the IBM subsidiary Service Bureau Corporation, Tymshare, National CSS, Dial Data, and Bolt, Beranek and Newman.

In the 1990s, telecommunications companies began offering VPN private-network services with comparable quality of service but at a lower cost. By balancing server utilization as they saw fit, they could use overall network bandwidth more effectively. The cloud symbol came to mark the demarcation point between what the provider was responsible for and what the user was responsible for. Cloud computing extends this boundary to cover the servers as well as the network infrastructure.

Since 2000, Amazon has played a key role in the development of cloud computing by modernizing its data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time. After moving to the new cloud architecture, Amazon found internal efficiency gains: small, fast-moving "two-pizza teams" (teams small enough to be fed with two pizzas) could add new features faster and more easily. Amazon then began developing a new product line to provide cloud computing to external customers, and launched Amazon Web Services (AWS) in 2006.

In early 2008, Eucalyptus became the first open-source, AWS-API-compatible platform for deploying private clouds, and OpenNebula was enhanced in the European Commission-funded RESERVOIR project. In the same year, efforts were focused on providing quality-of-service guarantees (as required by real-time interactive applications) for cloud-based infrastructures, within the European Commission-funded IRMOS project. In mid-2008, Gartner saw an opportunity for cloud computing to shape the relationship among the consumers of IT services, those who use IT services, and those who sell them, and observed that organizations switching from company-owned hardware and software assets to per-use service-based models meant that the "projected shift to computing… will result in dramatic growth in IT products in some areas and significant reductions in other areas."

On March 1, 2011, IBM announced SmartCloud in support of its Smarter Planet framework. Among the various components of Smarter Computing, cloud computing is the most important part.

The 1960s

John McCarthy, a computing and artificial intelligence expert at MIT, said: "One day, computing will become public infrastructure, just like electricity and the telephone." This idea foreshadowed the form of computing we now know as cloud computing.

1995

Larry Ellison, founder of Oracle, proposed the idea of "Network Computing". The idea was actually quite distinctive and something of a jab at Microsoft at the time. In essence, we should not have to "plant" all kinds of software into the user's PC, from the operating system to every other application; a connection to a server that provides an environment covering all of the user PC's needs should be enough.

The idea of "network computing" was quite popular in this era, and many companies rallied behind it, for example Sun Microsystems and Novell Netware. Unfortunately, network quality at the time was not yet adequate, and users tended to choose PCs because they tended to be faster.

The Late 1990s

The ASP (Application Service Provider) concept was born, marked by the emergence of data-processing-center companies. It reflected an improvement in the quality of computer networks; access for users became faster.

2000

Marc Benioff, a former vice president of Oracle, launched salesforce.com, CRM software delivered as SaaS (Software as a Service). Unexpectedly, the move was a huge hit. Carrying on the vision of Larry Ellison, his former boss, he pursued a mission he called "The End of Software".

2005 – Present

Cloud computing has kept growing in popularity, from actual system deployments to use of the name itself: Amazon.com with EC2 (Elastic Compute Cloud), Google with Google App Engine, IBM with its Blue Cloud initiative, and so on. Cloud computing has skyrocketed over time. Today such systems are used very widely, helped by the much improved quality of computer networks and the variety of devices available. Examples of applications include Evernote, Dropbox, Google Drive, SkyDrive, YouTube, Scribd, and more.

Benefits of Cloud Computing

From the explanation of cloud computing above, there are many benefits we can draw from it, namely:

  • Scalability: with cloud computing we can increase our data-storage capacity without buying additional equipment such as hard disks; we simply add capacity provided by the cloud computing provider.
  • Accessibility: we can access our data anytime and anywhere as long as we are connected to the internet, which makes it easy to reach data at important moments.
  • Security: our data can be kept secure by the cloud computing provider, so an IT-based company can store its data safely with the provider; this also reduces the cost of securing company data.
  • Creation: users can work on and develop their creations or projects without delivering them to the company in person; they can send them through the cloud computing provider.
  • Peace of mind: when a natural disaster strikes, our data stays safe in the cloud even if our hard disk or gadget is damaged.

Cloud Computing Services

Infrastructure as a Service (IaaS)

Infrastructure as a Service is a cloud computing service that provides IT infrastructure in the form of CPU, RAM, storage, bandwidth and other configuration options. These components are used to build a virtual computer, onto which an operating system and applications can be installed as needed. The advantage of IaaS is that there is no need to buy a physical computer, which saves money, and the virtual computer's configuration can be changed to suit demand; for example, when storage is nearly full, it can be enlarged immediately. Companies providing IaaS include Amazon EC2, TelkomCloud and BizNetCloud.
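As one hedged example of enlarging storage on demand, the sketch below grows an Amazon EBS volume with boto3; the volume ID is a placeholder, and after the API call the filesystem inside the virtual machine still has to be expanded.

    # Minimal sketch of growing cloud storage when it is nearly full,
    # using Amazon EBS via boto3. The volume ID is a placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
        Size=200,                          # new size in GiB (must not shrink)
    )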

Platform as a Service (PaaS)

Platform as a Service is a service that provides a computing platform, usually including an operating system, database, web server and application framework, on which an application we have built can run. The company providing the service is responsible for maintaining this computing platform. The advantage of PaaS for developers is that they can focus on the application they are building without thinking about the upkeep of the platform. Examples of PaaS providers are Amazon Web Services and Windows Azure.

Software as a Service (SaaS)

Software as a Service is a cloud computing service in which we directly use an application that has already been provided, while the provider manages the infrastructure and platform that run it. Examples of email applications are Gmail, Yahoo and Outlook, while examples of social media applications are Twitter, Facebook and Google+. The advantage of this service is that users do not need to buy a license to access the application; they only need a client device connected to the internet. Some applications, such as Office 365 and Adobe Creative Cloud, do require users to subscribe before they can access them.

Methods and Implementation of Cloud Computing

How Cloud Computing Works

The following describes how data storage and replication work in cloud computing. With cloud computing, the local computer no longer has to run the heavy computation required by applications; there is no need to install a software package on every computer, only the operating system and a single application[8]. The network of computers that makes up the cloud (the internet) handles the rest. These servers run everything from email and word processing to complex data-analysis programs. Many things happen when a user accesses a popular website in the cloud. The user's Internet Protocol (IP) address, for example, can be used to establish where the user is located (geolocation). Domain Name System (DNS) services can then direct the user to a server cluster close to them, so the site can be accessed quickly and in the local language. Users do not log in to a server; they log in to their service using a session ID or a cookie stored in their browser. What users see in the browser usually comes from web servers. Web servers run software that presents users with an interface for collecting commands or instructions (clicks, typing, uploads and so on). These commands are then interpreted by the web servers or processed by application servers. Information is then stored in or retrieved from database servers or file servers, and the user is presented with an updated page. Data on the various servers is synchronized around the world for fast global access and to prevent data loss.
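The session-cookie mechanism described above can be sketched in a few lines of Python with the requests library; the URL and form fields are illustrative placeholders, not a real service.

    # Minimal sketch of cookie-based sessions: log in once, then the
    # stored cookie (not a server login) identifies later requests.
    import requests

    SERVICE = "https://cloud.example.com"  # placeholder service URL

    with requests.Session() as session:
        # The login response sets a session cookie, which the Session
        # object stores and replays on every subsequent request.
        session.post(f"{SERVICE}/login", data={"user": "alice", "password": "secret"})
        print("Cookies held by the client:", session.cookies.get_dict())

        # Later requests are recognized via the cookie.
        profile = session.get(f"{SERVICE}/profile")
        print("Status:", profile.status_code)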

Web services have provided a common mechanism for service delivery, making Service-Oriented Architecture (SOA) an ideal fit. The goal of SOA is to meet the requirements of loosely coupled, standards-based, protocol-independent distributed computing. In SOA, software resources are packaged as "services": well-defined, self-contained modules that provide standard business functionality independent of the context of other services. The maturity of web services has enabled the creation of powerful services that can be accessed on demand in a uniform way.

Implementing Cloud Computing

There are three main components required to implement cloud computing:

  • Computer front end

The front end is the client side: the user’s computer and the application (usually a web browser) used to access the cloud service.

  • Computer back end

In large-scale deployments, the back end usually consists of server computers housed in large data-center racks. Back-end computers generally need high performance, because they may have to serve up to thousands of data requests.

  • The connection between the two

The link between the front end and the back end can be a LAN or the internet; a minimal sketch of all three pieces follows below.
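Here is a minimal, self-contained Python sketch of those three pieces: a back-end HTTP server, a front-end client, and localhost standing in for the connecting network. The port and names are illustrative only:

```python
# Minimal sketch: back end (HTTP server), front end (client), and the
# network between them (localhost stands in for a LAN or the internet).
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BackEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        # The back end does the heavy lifting and returns a result.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"result computed by the back end")

server = HTTPServer(("localhost", 8080), BackEnd)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The front end is any client on the network; a browser would do the same.
with urllib.request.urlopen("http://localhost:8080/") as resp:
    print(resp.read().decode())

server.shutdown()
```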

Implementing Cloud Computing in Government (E-Government)

Cloud computing in government (e-government) can boost performance, particularly in the public sector. E-government helps government staff deliver better services to the public. The Indonesian government already uses cloud computing. One example is the provision of information resources: the Agency for the Assessment and Application of Technology (BPPT) offers cloud computing as a managed ICT outsourcing service for government institutions. The service aims to accelerate e-government, because it lets government users concentrate on delivering services instead of being bogged down in configuring and maintaining IT equipment.

Challenges

Cloud computing is still a young field, and not everyone knows the technology yet, so several problems have emerged in introducing it to the wider world. Because cloud computing stores data over the internet, an internet connection is mandatory for its users; any problem with the connection slows everything down, because each operation takes longer to complete.

Another issue is vendor dependence. A company that keeps its data in the cloud becomes highly dependent on the vendor (the cloud service provider), because the company does not run its own servers. If the vendor has a poor backup service or the vendor’s servers fail, the company can suffer large losses, because all of the data stored with the vendor is affected. Using cloud computing also requires a lot of bandwidth, since the data flowing in and out of an account is not small; substantial bandwidth is needed to carry the transferred data.

Security and privacy are further concerns: once data is placed on the internet, it could be exposed to the public at large, and if the data is highly confidential, a leak can be fatal. Partly because of these problems, support from many parties is still limited, as storing data on the internet is still a new practice. Finally, the many attackers around the world trying to break into internet systems force vendors to be very careful in managing the resources used for cloud computing.

Examples of Cloud Computing

Google Drive

Google Drive is an online storage service owned by Google, launched on April 24, 2012. Google Drive is essentially a development of Google Docs. At launch, Google Drive gave every user 5GB of free storage (later raised to 15GB), and the capacity can be expanded by purchasing additional storage. Storing files in Google Drive lets the owner access them anytime and anywhere using a desktop computer, laptop, tablet, or smartphone. Files can also easily be shared with others for shared use or collaborative editing.

Google Drive Features

  • 15GB of free storage

Google Drive gives its users 15GB of storage free of charge for storing documents, whether images, videos, music, or other files.

  • Document creation

Google Drive lets its users create documents: word processing, spreadsheets, presentations, forms, and other document types.

  • File sharing

Google Drive makes it easy to share files with others, and also lets others edit the files we create.

  • Integration with other Google services

Users of other Google services will find it easy to manage their files from Google Drive, because Google Drive is automatically integrated with those services.

  • Search

Google Drive gives its users better, faster search using keywords. It can also recognize images and text in scanned documents.

  • Viewing many file types

Google Drive can open and display more than 30 file types, including video and image files, without requiring the user to download and install software for each file type or extension.

  • Running applications

Google Drive can also create, run, and share the user’s favorite application files.

Windows Azure

Windows Azure is a cloud computing-based operating system created by Microsoft for developing and managing applications and services on the global network of Microsoft data centers. Windows Azure supports a wide range of programming languages and tools. It was released on February 1, 2010.

Windows Azure Features

  • Infrastructure services

Windows Azure provides infrastructure that scales to fit your needs, whether you are building new applications or running existing ones.

  • Develop and test

Windows Azure lets users develop an application and immediately test it quickly.

  • Big Data

Windows Azure provides large-scale data capacity, powered by Apache Hadoop.

  • Mobile applications

Windows Azure makes it easy to build mobile applications; once built, an application can be pushed straight into cloud storage.

  • Media

Windows Azure Media Services let you build media-distribution solutions that can deliver media to Adobe Flash, Android, iOS, Windows, and other platforms.

  • Web applications

Windows Azure offers security and flexibility in the development, deployment, and scaling options for web applications of any size.

  • Storage, backup, and recovery

Windows Azure provides storage, backup, and recovery solutions for any kind of data.

  • Identity and access management

Windows Azure Active Directory secures corporate identities and manages large numbers of users within an enterprise.

  • Integration

Windows Azure lets users bring together all of their applications, data, devices, and partners, both on-premises and in the cloud.

  • Data management

Windows Azure provides the right solution for users’ data needs.

Cloud Computing

From Wikipedia, the free encyclopedia

Cloud computing metaphor: the group of networked elements providing services need not be individually addressed or managed by users; instead, the entire provider-managed suite of hardware and software can be thought of as an amorphous cloud.

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Large clouds, predominant today, often have functions distributed over multiple locations from central servers. If the connection to the user is relatively close, it may be designated an edge server.

Clouds may be limited to a single organization (enterprise clouds), be available to many organizations (public cloud), or a combination of both (hybrid cloud).

Cloud computing relies on sharing of resources to achieve coherence and economies of scale.

Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand. Cloud providers typically use a “pay-as-you-go” model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models.

The availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, has led to growth in cloud computing. By 2019, Linux was the most widely used operating system, including in Microsoft’s offerings, and is thus described as dominant.

History

“Cloud computing” was popularized by Amazon.com when it released its Elastic Compute Cloud product in 2006, though references to the phrase “cloud computing” appeared as early as 1996, with the first known mention in a Compaq internal document.

The cloud symbol was used to represent networks of computing equipment in the original ARPANET as early as 1977, and in the CSNET by 1981—both predecessors to the Internet itself. The word cloud was used as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics. With this simplification, the implication is that the specifics of how the endpoints of a network are connected are not relevant for understanding the diagram.

The term cloud was used to refer to platforms for distributed computing as early as 1993, when Apple spin-off General Magic and AT&T used it in describing their (paired) Telescript and PersonaLink technologies. In Wired’s April 1994 feature “Bill and Andy’s Excellent Adventure II”, Andy Hertzfeld commented on Telescript, General Magic’s distributed programming language:

“The beauty of Telescript … is that now, instead of just having a device to program, we now have the entire Cloud out there, where a single program can go and travel to many different sources of information and create sort of a virtual service. No one had conceived that before. The example Jim White [the designer of Telescript, X.400 and ASN.1] uses now is a date-arranging service where a software agent goes to the flower store and orders flowers and then goes to the ticket shop and gets the tickets for the show, and everything is communicated to both parties.”

Early History

During the 1960s, the initial concepts of time-sharing became popularized via RJE (Remote Job Entry); this terminology was mostly associated with large vendors such as IBM and DEC. Full time-sharing solutions were available by the early 1970s on such platforms as Multics (on GE hardware), Cambridge CTSS, and the earliest UNIX ports (on DEC hardware). Yet the “data center” model, where users submitted jobs to operators to run on IBM mainframes, was overwhelmingly predominant.

In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extended this boundary to cover all servers as well as the network infrastructure. As computers became more diffused, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing. They experimented with algorithms to optimize the infrastructure, platform, and applications to prioritize CPUs and increase efficiency for end users.

The use of the cloud metaphor for virtualized services dates at least to General Magic in 1994, where it was used to describe the universe of “places” that mobile agents in the Telescript environment could go, as Andy Hertzfeld’s remarks quoted above illustrate.

The use of the cloud metaphor is credited to General Magic communications employee David Hoffman, based on long-standing use in networking and telecom. In addition to use by General Magic itself, it was also used in promoting AT&T’s associated PersonaLink Services.

2000s

Cloud computing has been in existence since 2000.

In August 2006, Amazon created subsidiary Amazon Web Services and introduced its Elastic Compute Cloud (EC2).

In April 2008, Google released Google App Engine in beta.

In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.

By mid-2008, Gartner saw an opportunity for cloud computing “to shape the relationship among consumers of IT services, those who use IT services and those who sell them” and observed that “organizations are switching from company-owned hardware and software assets to per-use service-based models” so that the “projected shift to computing … will result in dramatic growth in IT products in some areas and significant reductions in other areas.”

In 2008, the U.S. National Science Foundation began the Cluster Exploratory program to fund academic research using Google-IBM cluster technology to analyze massive amounts of data.

2010s

In February 2010, Microsoft released Microsoft Azure, which was announced in October 2008.

In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack. The OpenStack project was intended to help organizations offer cloud-computing services running on standard hardware. The early code came from NASA’s Nebula platform as well as from Rackspace’s Cloud Files platform. As an open-source offering, along with other open-source solutions such as CloudStack, Ganeti, and OpenNebula, it has attracted attention from several key communities. Several studies have compared these open-source offerings based on a set of criteria.

On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet. Among the various components of the Smarter Computing foundation, cloud computing is a critical part. On June 7, 2012, Oracle announced the Oracle Cloud. This cloud offering is poised to be the first to provide users with access to an integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and Infrastructure (IaaS) layers.

In May 2012, Google Compute Engine was released in preview, before being rolled out into General Availability in December 2013.

In 2019, it was revealed that Linux is the most used operating system on Microsoft Azure, not just dominant elsewhere.

Similar Concepts

The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge about or expertise with each of them. The cloud aims to cut costs and helps users focus on their core business instead of being impeded by IT obstacles. The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more “virtual” devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on demand. By minimizing user involvement, automation speeds up the process, reduces labor costs, and reduces the possibility of human error.

Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.

Cloud computing shares characteristics with:

  • Client–server model—Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).
  • Computer bureau—A service bureau providing computer services, particularly from the 1960s to 1980s.
  • Grid computing—A form of distributed and parallel computing, whereby a ‘super and virtual computer’ is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
  • Fog computing—Distributed computing paradigm that provides data, compute, storage and application services closer to client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client side (e.g. mobile devices), instead of sending data to a remote location for processing.
  • Mainframe computer—Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as: census; industry and consumer statistics; police and secret intelligence services; enterprise resource planning; and financial transaction processing.
  • Utility computing—The “packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity.”
  • Peer-to-peer—A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
  • Green computing
  • Cloud sandbox—A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.

Characteristics

Cloud computing exhibits the following key characteristics:

  • Agility for organizations may be improved, as cloud computing may increase users’ flexibility with re-provisioning, adding, or expanding technological infrastructure resources.
  • Cost reductions are claimed by cloud providers. A public-cloud delivery model converts capital expenditures (e.g., buying servers) to operational expenditure. This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and need not be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is “fine-grained”, with usage-based billing options. Fewer in-house IT skills are also required to implement projects that use cloud computing. The e-FISCAL project’s state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect to it from anywhere.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user’s computer and can be accessed from different places (e.g., different work locations, while travelling, etc.).
  • Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer and pay for the resources and equipment to meet their highest possible load-levels)
    • Utilisation and efficiency improvements for systems that are often only 10–20% utilized.
  • Performance is monitored by IT experts from the service provider, and consistent and loosely coupled architectures are constructed using web services as the system interface.
  • Productivity may be increased when multiple users can work on the same data simultaneously, rather than waiting for it to be saved and emailed. Time may be saved as information does not need to be re-entered when fields are matched, nor do users need to install application software upgrades to their computer.
  • Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
  • Scalability and elasticity via dynamic (“on-demand”) provisioning of resources on a fine-grained, self-service basis in near real-time[53][54] (Note, the VM startup time varies by VM type, location, OS and cloud providers), without users having to engineer for peak loads. This gives the ability to scale up when the usage need increases or down if resources are not being used.[58] Emerging approaches for managing elasticity include the utilization of machine learning techniques to propose efficient elasticity models.
  • Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because service providers are able to devote resources to solving security issues that many customers cannot afford to tackle or which they lack the technical skills to address. However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users’ desire to retain control over the infrastructure and avoid losing control of information security.

The National Institute of Standards and Technology’s definition of cloud computing identifies “five essential characteristics”:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.

Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

— National Institute of Standards and Technology

Service Models

Though service-oriented architecture advocates “everything as a service” (with the acronyms EaaS or XaaS, or simply aas), cloud-computing providers offer their “services” according to different models, of which the three standard models per NIST are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These models offer increasing abstraction; they are thus often portrayed as layers in a stack: infrastructure-, platform- and software-as-a-service, but these need not be related. For example, one can provide SaaS implemented on physical machines (bare metal), without using underlying PaaS or IaaS layers, and conversely one can run a program on IaaS and access it directly, without wrapping it as SaaS.

Infrastructure as a service (IaaS)

“Infrastructure as a service” (IaaS) refers to online services that provide high-level APIs used to dereference various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers’ varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. Containerisation offers higher performance than virtualization, because there is no hypervisor overhead. Also, container capacity auto-scales dynamically with computing load, which eliminates the problem of over-provisioning and enables usage-based billing. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.

The NIST’s definition of cloud computing describes IaaS as “where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).”

IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
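As an illustration, requesting a virtual machine from an IaaS provider is typically a single API call. The sketch below uses AWS EC2 via the boto3 library; the AMI ID and region are placeholders, and valid AWS credentials are assumed, so treat it as a sketch rather than a ready-to-run deployment:

```python
# Minimal IaaS sketch: provision a virtual machine on demand through the
# provider's API (AWS EC2 via boto3 here). The image ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI; use a real image ID
    InstanceType="t2.micro",          # the provider bills for what you allocate
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```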

Platform as a service (PaaS)

The NIST’s definition of cloud computing defines Platform as a Service as:

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

PaaS vendors offer a development environment to application developers. The provider typically develops a toolkit and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, programming-language execution environment, database, and web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.
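From the developer’s side, targeting a PaaS usually means writing only the application itself; the platform supplies and maintains the OS, runtime, and web server. Here is a minimal sketch of such an app, using Flask as a stand-in for whatever framework a given platform supports:

```python
# Minimal PaaS-style application sketch: the developer writes only this
# code; the platform provides the OS, runtime, web server, and scaling.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a platform-managed app!"

if __name__ == "__main__":
    # Locally we run the development server; on a PaaS the platform runs
    # the app behind its own web server and scales it as needed.
    app.run(port=8000)
```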

Some integration and data management providers also use specialized applications of PaaS as delivery models for data. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute and govern integration flows. Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. dPaaS delivers integration—and data-management—products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of programs by building data applications for the customer. dPaaS users access data through data-visualization tools. Platform as a Service (PaaS) consumers do not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but have control over the deployed applications and possibly configuration settings for the application-hosting environment.

Software as a service (SaaS)

The NIST’s definition of cloud computing defines Software as a Service as:

The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as “on-demand software” and is usually priced on a pay-per-use basis or using a subscription fee. In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user’s own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
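To make the multitenancy idea concrete, here is a minimal, hypothetical sketch (not any provider’s actual implementation) of one application instance keeping several tenant organizations’ data separated by a tenant key:

```python
# Minimal multitenancy sketch: one application instance serves several
# tenant organizations, with each tenant's data partitioned by tenant_id.
from collections import defaultdict
from typing import Optional

class MultiTenantStore:
    def __init__(self) -> None:
        self._data: defaultdict = defaultdict(dict)  # tenant_id -> {key: value}

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._data[tenant_id][key] = value

    def get(self, tenant_id: str, key: str) -> Optional[str]:
        # A tenant only ever sees its own partition of the shared store.
        return self._data[tenant_id].get(key)

store = MultiTenantStore()
store.put("acme", "plan", "enterprise")
store.put("globex", "plan", "trial")
print(store.get("acme", "plan"))    # -> enterprise
print(store.get("globex", "plan"))  # -> trial
```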

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so prices become scalable and adjustable if users are added or removed at any point. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and from personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that the users’ data is stored on the cloud provider’s server, so there could be unauthorized access to the data.

Mobile “Backend” as a Service (MBaaS)

In the mobile “backend” as a service (MBaaS) model, also known as backend as a service (BaaS), web app and mobile app developers are provided with a way to link their applications to cloud storage and cloud computing services, with application programming interfaces (APIs) exposed to their applications and custom software development kits (SDKs). Services include user management, push notifications, integration with social networking services, and more. This is a relatively recent model in cloud computing, with most BaaS startups dating from 2011 or later, but trends indicate that these services are gaining significant mainstream traction with enterprise consumers.

Serverless Computing

Serverless computing is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour. Despite the name, it does not actually involve running code without servers. Serverless computing is so named because the business or person that owns the system does not have to purchase, rent or provision servers or virtual machines for the back-end code to run on.

Function as a Service (FaaS)

Function as a service (FaaS) is a service-hosted remote procedure call that leverages serverless computing to enable the deployment of individual functions in the cloud that run in response to events. FaaS is included under the broader term serverless computing, but the terms may also be used interchangeably.
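As an illustration, a function deployed to a FaaS platform is usually just a handler invoked once per event. The sketch below follows the AWS Lambda Python handler convention; the event field shown is a hypothetical example:

```python
# Minimal FaaS-style handler sketch, following the AWS Lambda Python
# convention: the platform starts compute per invocation, and billing is
# per request rather than per server-hour.
import json

def handler(event, context):
    # 'event' carries the trigger payload; 'name' is a hypothetical field.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing; in production the platform calls handler().
if __name__ == "__main__":
    print(handler({"name": "cloud"}, None))
```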

Deployment Models

Cloud computing types

Private Cloud

Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. Undertaking a private cloud project requires significant engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. It can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users “still have to buy, build, and manage them” and thus do not benefit from less hands-on management, essentially “[lacking] the economic model that makes cloud computing such an intriguing concept”.

Public Cloud

A cloud is called a “public cloud” when the services are rendered over a network that is open for public use. Public cloud services may be free. Technically there may be little or no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network. Generally, public cloud service providers like Amazon Web Services (AWS), IBM, Oracle, Microsoft, and Google own and operate the infrastructure at their data centers, and access is generally via the Internet. AWS, Oracle, Microsoft, and Google also offer direct connect services called “AWS Direct Connect”, “Oracle FastConnect”, “Azure ExpressRoute”, and “Cloud Interconnect” respectively; such connections require customers to purchase or lease a private connection to a peering point offered by the cloud provider.

Hybrid Cloud

Hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources. Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers. A hybrid cloud service crosses isolation and provider boundaries so that it can’t be simply put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.

Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service. This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.

Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that can not be met by the private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and “bursts” to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and use cloud resources from public or private clouds, during spikes in processing demands. The specialized model of hybrid cloud, which is built atop heterogeneous hardware, is called “Cross-platform Hybrid Cloud”. A cross-platform hybrid cloud is usually powered by different CPU architectures, for example, x86-64 and ARM, underneath. Users can transparently deploy and scale applications without knowledge of the cloud’s hardware diversity. This kind of cloud emerges from the rise of ARM-based system-on-chip for server-class computing.

Others

Community Cloud

Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-party, and either hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing are realized.

Distributed Cloud

A cloud computing platform can be assembled from a distributed set of machines in different locations, connected to a single network or hub service. It is possible to distinguish between two types of distributed clouds: public-resource computing and volunteer cloud.

  • Public-resource computing—This type of distributed cloud results from an expansive definition of cloud computing, because such systems are more akin to distributed computing than cloud computing. Nonetheless, it is considered a sub-class of cloud computing.
  • Volunteer cloud—Volunteer cloud computing is characterized as the intersection of public-resource computing and cloud computing, where a cloud computing infrastructure is built using volunteered resources. Many challenges arise from this type of infrastructure, because of the volatility of the resources used to build it and the dynamic environment it operates in. It can also be called peer-to-peer clouds or ad-hoc clouds. An interesting effort in this direction is Cloud@Home, which aims to implement a cloud computing infrastructure using volunteered resources, providing a business model to incentivize contributions through financial restitution.

Multicloud

Multicloud is the use of multiple cloud computing services in a single heterogeneous architecture to reduce reliance on single vendors, increase flexibility through choice, mitigate against disasters, etc. It differs from hybrid cloud in that it refers to multiple cloud services, rather than multiple deployment modes (public, private, legacy).

Big Data Cloud

The issues of transferring large amounts of data to the cloud as well as data security once the data is in the cloud initially hampered adoption of cloud for big data, but now that much data originates in the cloud and with the advent of bare-metal servers, the cloud has become a solution for use cases including business analytics and geospatial analysis.

HPC cloud

HPC cloud refers to the use of cloud computing services and infrastructure to execute high-performance computing (HPC) applications. These applications consume a considerable amount of computing power and memory and are traditionally executed on clusters of computers. In 2016 a handful of companies, including R-HPC, Amazon Web Services, Univa, Silicon Graphics International, Sabalcore, Gomput, and Penguin Computing, offered high-performance computing clouds. The Penguin On Demand (POD) cloud was one of the first non-virtualized remote HPC services offered on a pay-as-you-go basis. Penguin Computing launched its HPC cloud in 2016 as an alternative to Amazon’s EC2 Elastic Compute Cloud, which uses virtualized computing nodes.

Architecture

Cloud computing sample architecture

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.
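As a minimal sketch of that loose coupling, the two components below communicate only through a queue, never directly; an in-process queue.Queue stands in for a real cloud messaging service:

```python
# Minimal loose-coupling sketch: a producer and a consumer exchange work
# only via a queue. queue.Queue here stands in for a managed message queue.
import queue
import threading

messages = queue.Queue()

def producer():
    for i in range(3):
        messages.put(f"task-{i}")   # component A emits work items
    messages.put(None)              # sentinel: no more work

def consumer():
    while True:
        item = messages.get()
        if item is None:
            break
        print(f"processed {item}")  # component B consumes at its own pace

threading.Thread(target=producer).start()
consumer()
```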

Cloud Engineering

Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level concerns of commercialization, standardization, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information technology engineering, security, platform, risk, and quality engineering.

Security and Privacy

Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services. Solutions to privacy include policy and legislation as well as end users’ choices for how data is stored. Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access. Identity management systems can also provide practical solutions to privacy concerns in cloud computing. These systems distinguish between authorized and unauthorized users and determine the amount of data that is accessible to each entity. The systems work by creating and describing identities, recording activities, and getting rid of unused identities.

According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25%, and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities. On a cloud provider platform shared by different users, there may be a possibility that information belonging to different customers resides on the same data server. Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud. “There are some real Achilles’ heels in the cloud infrastructure that are making big holes for the bad guys to get into.” Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack—a process he called “hyperjacking”. Some examples of this include the Dropbox security breach and the 2014 iCloud leak. Dropbox was breached in October 2014, with over 7 million of its users’ passwords stolen by hackers seeking to monetize them in Bitcoin (BTC). With these passwords, attackers can read private data as well as have this data indexed by search engines (making the information public).

There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership. Physical control of the computer equipment (private cloud) is more secure than having the equipment off site and under someone else’s control (public cloud). This delivers great incentive to public cloud computing service providers to prioritize building and maintaining strong management of secure services. Some small businesses that don’t have expertise in IT security could find that it’s more secure for them to use a public cloud. There is the risk that end users do not understand the issues involved when signing on to a cloud service (persons sometimes don’t read the many pages of the terms of service agreement, and just click “Accept” without reading). This is important now that cloud computing is becoming popular and required for some services to work, for example for an intelligent personal assistant (Apple’s Siri or Google Now). Fundamentally, private cloud is seen as more secure with higher levels of control for the owner, however public cloud is seen to be more flexible and requires less time and money investment from the user.

Limitations and Disadvantages

According to Bruce Schneier, “The downside is that you will have limited customization options. Cloud computing is cheaper because of economics of scale, and—like any outsourced task—you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want. Fewer options at a much cheaper price: it’s a feature, not a bug.” He also suggests that “the cloud provider might not meet your legal needs” and that businesses need to weigh the benefits of cloud computing against the risks. In cloud computing, control of the back-end infrastructure is limited to the cloud vendor only. Cloud providers often decide on the management policies, which moderates what the cloud users are able to do with their deployment. Cloud users are also limited to the control and management of their applications, data, and services. This includes data caps, which are placed on cloud users by the cloud vendor allocating a certain amount of bandwidth to each customer and are often shared among other cloud users.

Privacy and confidentiality are big concerns in some activities. For instance, sworn translators working under the stipulations of an NDA might face problems regarding sensitive data that is not encrypted.

Cloud computing is beneficial to many enterprises; it lowers costs and allows them to focus on competence instead of on matters of IT and infrastructure. Nevertheless, cloud computing has proven to have some limitations and disadvantages, especially for smaller business operations, particularly regarding security and downtime. Technical outages are inevitable and sometimes occur when Cloud Service Providers (CSPs) become overwhelmed while serving their clients. This may result in temporary business suspension. Since the technology relies on the internet, users cannot access their applications, servers, or data in the cloud during an outage.

Emerging Trends

Cloud computing is still a subject of research. A driving factor in the evolution of cloud computing has been chief technology officers seeking to minimize risk of internal outages and mitigate the complexity of housing network and computing hardware in-house. Major cloud technology companies invest billions of dollars per year in cloud Research and Development. For example, in 2011 Microsoft committed 90 percent of its $9.6 billion R&D budget to its cloud. Research by investment bank Centaur Partners in late 2015 forecasted that SaaS revenue would grow from $13.5 billion in 2011 to $32.8 billion in 2016.

Digital Forensics in the Cloud

The issue of carrying out investigations where the cloud storage devices cannot be physically accessed has generated a number of changes to the way that digital evidence is located and collected. New process models have been developed to formalize collection.

In some scenarios existing digital forensics tools can be employed to access cloud storage as networked drives (although this is a slow process generating a large amount of internet traffic).

An alternative approach is to deploy a tool that processes in the cloud itself.

For organizations using Office 365 with an ‘E5’ subscription, there is the option to use Microsoft’s built-in eDiscovery resources, although these do not provide all the functionality typically required for a forensic process.

The Internet of Things outlook for 2014: Everything connected and communicating

The Internet of Things is more than Internet-connected refrigerators and shoes that tweet; it’s a new wave of enabling devices to become more ‘intelligent’ and our chance to become better informed about our businesses and the world around us.

By Ken Hess for Consumerization: BYOD | January 10, 2014 — 13:00 GMT (21:00 GMT+08:00) | Topic: Tapping M2M: The Internet of Things

Kevin Ashton, a British technology pioneer who co-founded the Auto-ID Center at MIT, which created a global standard system for RFID and other sensors, coined the phrase “Internet of Things” back in 1999. His Internet of Things (IoT) is a system where the Internet is connected to the physical world via ubiquitous sensors. And sensors can be any device that gathers data and reports that data to a data collection facility such as a data warehouse, a database, or log server.

IoT isn’t just a fancy buzzword that describes how your refrigerator can let you know when you need to replace your spoiling milk or your rotting vegetables (although it can); it is so much more. How much more is left only to your imagination and your budget. You can do as little or as much with IoT as you want. For example, if you operate a food distribution business, you could install sensors in your trucks that send temperature, humidity, and dock-to-dock travel times back to your home office for analysis. You can also more accurately track the exact expense required to deliver each food product or container to the customer.
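As a sketch of that kind of telemetry, the snippet below posts one sensor reading to a central collection endpoint over HTTP. The URL and payload fields are hypothetical, and a real deployment would add authentication and encryption:

```python
# Minimal IoT telemetry sketch: a sensor node reports one reading to a
# central collection endpoint. URL and field names are hypothetical.
import time
import requests

reading = {
    "truck_id": "truck-42",
    "temperature_c": 3.8,
    "humidity_pct": 71.0,
    "timestamp": int(time.time()),
}

resp = requests.post("https://example.com/api/telemetry", json=reading, timeout=5)
print(resp.status_code)
```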

The Internet of Things is not just about gathering data but also about analyzing and using that data.

My best example of gathering and analysis of IoT data is the first instance of such a system: The Coke Machine at Carnegie-Mellon University’s Computer Science department, also known as the Internet Coke Machine.

One of the computer science students in 1982, David Nichols, had the original idea to poll the Coke machine so that he didn’t waste a trip to the machine only to find it empty. He and a group of fellow students (Mike Kazar on server software, David Nichols on documentation and user software, John Zsarnay on hardware, and Ivor Durham on the Finger interface) worked together to create this now famous connected vending machine.

From their labs, they could check the status of the sodas in the vending machine. I’m pretty sure they didn’t realize the international effect this would someday have when they devised their plan. Nor did they realize that anyone beyond themselves would care*.

It doesn’t matter that they were trying to save steps or that they were only trying to monitor the status of their favorite bubbly beverages**. But what really matters is that they did it. And they used the data. Their little experiment changed the way we look at “things” and the data that they can produce.

But serious IoT is coming to the world in a big way and has far reaching implications for big data, security, and cloud computing.

Big Data

So-called “big” data is a buzzword that seems to emanate from the most unusual places these days, mostly from the mouths and fingertips of people who haven’t a clue what it means. What IoT means for big data is that the data from all these “things” has to be stored and analyzed. That is big data. If you look at some of the projections for the next few years, you’ll have an idea of what I mean.

Internet-connected cars, sensors on raw food products, sensors on packages of all kinds, and data streaming in from the unlikeliest of places (restrooms, kitchens, televisions, personal mobile devices, cars, gasoline pumps, car washes, refrigerators, vending machines, and SCADA systems, for example) will generate a lot of data (big data).

Security

Lots of devices chattering away to centralized databases also means that someone needs to watch the machine-to-machine (M2M) communications. Security is a major issue with IoT. However, several companies including Wind River have made great advances in IoT and M2M security.

Unfortunately, security for IoT is multilayered and expensive to implement. Strong security must exist in the three vulnerable layers: physical, network, and data. By physical, I mean the device itself must be secured with locks, tamper-proof housings, alarms, or out-of-reach placement. Physical security is a primary problem with IoT. Devices that are easily stolen or broken into pose the biggest threats.

Network communications must be secured by VPN or another form of encryption. Man-in-the-middle attacks are common against such devices, and manufacturers need to make them difficult for would-be attackers.

Data security poses a problem as well. First, there’s “data at rest” that’s stored locally on the device. Compromise of this information could prove detrimental to the rest of the network, because it could reveal other device locations, network topology, server names, and even usernames and passwords. All data at rest should be encrypted to prevent this type of breach.
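Here is a minimal sketch of encrypting data at rest, using the Python cryptography library’s Fernet recipe. In practice the key itself must be provisioned and stored securely (for example in a hardware element), never alongside the data it protects:

```python
# Minimal data-at-rest encryption sketch using the 'cryptography' library.
# In a real device, the key must be kept out of ordinary local storage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in production, provisioned securely
f = Fernet(key)

secret = b"server=collector.local;user=sensor01"
token = f.encrypt(secret)         # what actually sits on the device's disk
print(token)

print(f.decrypt(token))           # recoverable only with the key
```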

Second, there’s “data on the move” or “data in motion”, which is covered in part by encrypted communications; but what happens to the data after it lands on a target device, such as a data center server, is also important. The transfer of that data across a network should also be encrypted.

Encrypted devices, encrypted communications over the entire data path, and hardened physical devices make it very difficult to extract value from any recovered information. In fact, the purpose of this multilayered security is to make it far more expensive to glean usable data than the data itself would yield to the criminal or malicious hacker.

Cloud Computing

You might wonder how cloud computing fits into the IoT world because in the years before cloud computing we did just fine by having our devices report directly to a home server. Nowadays there’s so much more data to deal with from disparate sources that cloud computing can play a significant role in IoT scenarios.

For example, if you have a chain of restaurants spread out over a wide geographic area or worldwide, then your data streams in on a continuous basis. There’s never a good time for taking your services offline for maintenance. This is where cloud computing comes to the rescue.

Your ‘things’ can collect data 100 percent of the time with no breaks in service. If you purchase cloud storage, you can filter the data and extract it for offload at your convenience. To me, IoT and cloud computing are the perfect technology marriage.
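Here is a minimal sketch of that collect-continuously, offload-at-your-convenience pattern, assuming the third-party `requests` package. The ingestion URL, batch size, and sensor reading are hypothetical stand-ins for whatever your cloud provider actually exposes.

```python
# Minimal sketch: collect readings continuously, offload in batches.
# Assumes the third-party 'requests' package; the URL and reading are
# hypothetical placeholders for a real cloud ingestion endpoint.
import time
import requests

BATCH_SIZE = 100
UPLOAD_URL = "https://ingest.example.com/v1/readings"  # hypothetical

def read_sensor():
    """Stand-in for a real device read."""
    return {"ts": time.time(), "temp_c": 4.1}

buffer = []
while True:
    buffer.append(read_sensor())                # collection never stops
    if len(buffer) >= BATCH_SIZE:
        try:
            resp = requests.post(UPLOAD_URL, json=buffer, timeout=10)
            resp.raise_for_status()             # treat non-2xx as failure
            buffer.clear()                      # offload succeeded
        except requests.RequestException:
            pass                                # keep buffering; retry next cycle
    time.sleep(1)
```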

You won’t have to keep your ear too close to the ground in 2014 to hear about IoT. If you do, you’re just not listening. IoT isn’t a marketing term or tech buzzword, it’s a real thing. You should learn about it and how it can help your company learn more about itself. Seriously.

If you’re losing money on a particular part of your business, then IoT might help you fix it with better controls, better tracking, and better reporting. Should security, big data, and the cloud computing connection prove too overwhelming, connect with a company that knows something about IoT. And if you still don’t know where to start, just ask me by using the Author Contact Form.

What do you think about IoT and what it can do for your company? Talk back and let me know.

*If you care, you can read the recollected story from David Nichols and others.

**Admittedly, it would have been cool to do this with a Coke machine but it would have been far more enticing to me, if they’d also hooked up the snack machine to check the availability of Rice Krispies treats or gum. I love gum. I’m a gum freak. You’ve never seen anyone chew gum like I chew gum. I hope no one ever places gum on the list of endangered things. I might go unhappily extinct.

Getting to Know the Internet of Things (IoT)

There is no end to the discussion of the Internet of Things, commonly abbreviated IoT, because it has no fixed definition; the topics keep coming, from our daily routines to the objects that can be turned into devices that make our activities easier. We can, however, test whether a device really belongs to the IoT with questions like these: Can one vendor’s product work with another vendor’s? Can a door lock from vendor A communicate with a light switch from vendor B, and what happens when a user wants to bring their thermostat into that conversation?

So the Internet of Things (IoT) is a concept in which an object has the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. IoT has grown out of the convergence of wireless technology, micro-electromechanical systems (MEMS), and the Internet.

A “thing” in the Internet of Things can be almost any subject: a person with a heart-monitor implant, a farm animal with a biochip transponder, or a car equipped with built-in sensors that warn the driver when tire pressure is low. So far, IoT is most closely associated with machine-to-machine (M2M) communication in manufacturing and in the electricity, oil, and gas industries. Products built with M2M communication capability are often described as intelligent or “smart”: smart cables, smart meters, and smart grid sensors, for example. A hedged sketch of that kind of M2M reporting follows below.
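The snippet below has a hypothetical smart meter publish readings over MQTT using the Eclipse `paho-mqtt` client (1.x-style constructor). The broker address, client ID, and topic are placeholders, and a production device would also enable TLS and authentication on the connection.

```python
# Minimal sketch: a "smart meter" reporting over M2M messaging (MQTT).
# Assumes the third-party 'paho-mqtt' package (1.x-style Client constructor);
# broker address, client ID, and topic are hypothetical. A production device
# would also call tls_set() and username_pw_set() before connecting.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="meter-0042")
client.connect("broker.example.com", 1883)  # hypothetical broker
client.loop_start()                         # background network thread

while True:
    payload = json.dumps({"meter": "0042", "kwh": 1523.7, "ts": time.time()})
    client.publish("meters/0042/usage", payload, qos=1)
    time.sleep(60)                          # report once a minute
```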

Research on IoT is still in its early stages, so there is no single definition of the Internet of Things. Below are several alternative definitions that have been proposed to help understand the Internet of Things (IoT) (id.wikipedia.org):

According to Ashton’s early definition in 2009, the Internet of Things has the potential to change the world, just as the Internet once did, and perhaps even more so. That statement is drawn from the following article:

“Today computers, and therefore people, depend almost entirely on the Internet for information; nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet was first captured and created by human beings, by typing, pressing a record button, taking a digital picture, or scanning a bar code.

Conventional diagrams of the Internet leave out the most important routers of all: people. The problem is that people have limited time, attention, and accuracy, all of which means they are not very good at capturing data about things in the real world.

We are physical, and so is our environment. Ideas and information are important, but things matter much more. Yet today’s information technology is so dependent on data originated by people that our computers know more about ideas than about things.”

Casagras (Coordination and support action for global RFID-related activities and standardisation) defines IoT as a global network infrastructure that connects physical and virtual objects by exploiting data-capture and communication capabilities. The infrastructure consists of existing networks and the Internet, together with their future extensions. It will offer object identification, sensing, and connection capability as the basis for developing independent, cooperative services and applications, and it is also characterized by a high degree of autonomous data capture, event transfer, network connectivity, and interoperability.

Source: idcloudhost | 17 JULY 2016