Wednesday, 10 November 2021

Pasha (Meaning)

Pasha

Pasha, previously known as bashaw, was a prestigious rank within the political and military system of the Ottoman Empire. It was usually bestowed upon governors, generals, dignitaries, and other high-ranking officials. The title of Pasha was considered an honorific, and it came in various ranks, comparable to the British title of Lord. In pre-republican Egypt, Pasha was also one of the most esteemed titles. The Pasha rank had three levels, with the highest being the first class, which allowed the holder to bear a standard with three horse-tails. The second class was allowed two, and the third class was permitted one.
The English word "pasha" comes from the Turkish word "paşa," which is believed to have Persian and Turkish roots. Some scholars connect it with the Persian word "pādšā," meaning "king," while others connect it with the Turkish word "baş(-ı)," meaning "head," or "baş-ağa," the title of an official. However, the etymologist Sevan Nişanyan argues that "pasha" comes from the Turkish word "beşe," meaning "boy" or "prince," which is derived from the Persian word "baxxe." In Old Turkish there was no distinction between the sounds /b/ and /p/, and the word was spelled "başa" until the 15th century.
When the title first appeared in Western Europe it was spelled with an initial "b," and English forms such as "bashaw," "bassaw," and "bucha" were common in the 16th and 17th centuries, all derived from the medieval Latin and Italian word "bassa." In Arabic-speaking regions, owing to the Ottoman presence, the title became frequently used in Arabic, though pronounced "basha" because Arabic lacks the sound /p/. The title "pasha" was initially applied only to military commanders, but it later became a title for any high official, or for anyone the court desired to honor, including unofficial persons.
Pashas ranked above Beys and Aghas but below Khedives and Viziers. The three grades of Pasha were distinguished by the number of yak-, horse-, or peacock-tails displayed on the holder's standard, a symbol of military authority during campaigns; the Sultan alone was entitled to four tails. A provincial territory governed by a Pasha could be called a pashaluk, with the administrative jurisdiction designated by terms such as eyalet or vilayet/walayah. Both Beylerbeys and valis/wālis were entitled to the style of Pasha, typically with two tails. Ottoman and Egyptian authorities conferred the title on both Muslims and Christians and frequently gave it to foreigners in their service. The title was an aristocratic honorific and could be hereditary or non-hereditary, as stipulated in the Firman issued by the Sultan and carrying the imperial seal, the "Tughra". It bestowed no rank or title on the holder's wife, nor did it elevate any religious leader. Unlike Western titles of nobility, Ottoman titles followed the given name, and holders of the title Pasha were often addressed as "Your Excellency" by foreign emissaries and representatives.

In current Egyptian and Levantine Arabic, the title "Pasha" is more similar to "Sir" than "Lord" and is commonly used by older individuals. Among younger generations in Egypt, it is considered an informal way of addressing male peers since the abolition of aristocratic titles after the Revolution of 1952. While it is not an official title, the public and media in Turkey use "Pasha" to refer to general officers in the Turkish Armed Forces.


Thursday, 6 November 2014

X Factor Indonesia

X Factor Indonesia

The X Factor Indonesia is a reality television music competition that aims to discover new singing talent in Indonesia. The winner receives 1 billion rupiah and a recording contract with Sony Music Indonesia. The show premiered on December 28, 2012 on RCTI and is the second X Factor franchise in Southeast Asia, after the Philippine version.

Unlike its rival, Indonesian Idol, The X Factor Indonesia is based on the British The X Factor franchise and has several distinctive features. The competition is open to both individuals and groups, and there is no upper age limit. Each judge is assigned one of four categories: boys aged 15 to 25, girls aged 15 to 25, individuals aged 26 and over, and groups. During the live shows, the judges mentor their assigned category, helping the contestants with song choices, styling, and staging, while also judging contestants from the other categories. The judges thus compete with one another to carry one of their own acts to victory, making them the winning judge.

The original judging panel of The X Factor Indonesia consisted of Ahmad Dhani, Rossa, Anggun, and Bebi Romeo, with Robby Purba as the host. Fatin Shidqia is the only winner of the show so far.

Although Indonesian Idol became a massive success and the number one show in Indonesia for seven consecutive seasons, its original British counterpart, Pop Idol, fared less well in the long run. Simon Cowell, a judge on Pop Idol, wanted to launch a show whose rights he owned. While the first series of Pop Idol was hugely successful, the second experienced a drop in viewer figures, and some, including Pop Idol judge Pete Waterman, considered Michelle McManus an undeserving winner. Pop Idol was axed in 2004, and Simon Cowell announced a new show, The X Factor, created without the involvement of Idol creator Simon Fuller. The X Factor's ratings were initially average, but by the sixth series in 2009 they had reached 10 million viewers per week.

In March 2010, RCTI, the broadcaster of Indonesian Idol, signed a deal to launch the Indonesian version of The X Factor. Initially, X Factor Indonesia was intended to replace Indonesian Idol in 2013, but after the immense success of the seventh season of Indonesian Idol in 2012, RCTI and FremantleMedia decided to continue collaborating, with the two shows airing in alternate years. To replicate that success, Fabian Dharmawan was appointed executive producer for RCTI for the first season of The X Factor Indonesia, while Glenn Sims, FremantleMedia's head of entertainment, Virgita Ruchiman, and Ken Irawati served as executive producers for FremantleMedia Indonesia.

The X Factor Indonesia started airing short commercials in August 2012, with a second promo featuring various Indonesian artists and One Direction. The show premiered on December 28, 2012, and focuses on identifying singing talent, with appearance, personality, stage presence, and dance routines also playing a role. The competition is open to solo artists and vocal groups aged 15 and above, with no upper age limit. Auditions are held in front of the producers first, and then those who pass are invited to perform for the judges in front of a live audience. The judges' auditions require at least three out of four judges to say "yes" for the auditionee to move forward. There is also an online audition process, where the auditionees can upload their performance on the X Factor Indonesia website and receive votes from viewers on YouTube. The judges act as mentors to their category and help contestants with song choices, styling, and staging while judging contestants from other categories. A selection of the auditions is broadcast over the first few weeks of the show, featuring the best, worst, and most bizarre auditions.

The contestants selected at auditions are further refined through a series of performances at "bootcamp" and then at the "judges' home visits", until a small number eventually progress to the live finals. In bootcamp, contestants go through a series of challenges until their number is trimmed to 26 and they are divided into their categories. At the end of bootcamp, the producers also reveal which category each judge will mentor. The judges then disperse for the "judges' home visits" round, where they further reduce their acts on location at a residence, with the help of a celebrity guest mentor.

The finals consist of a series of two gala live shows, with the first featuring the contestants' performances and the second revealing the results of the public voting. Celebrity guest performers will be featured regularly. The performance show occasionally begins with a group performance from the remaining contestants. The show is primarily concerned with identifying a potential pop star or star group, and singing talent, appearance, personality, stage presence, and dance routines are all important elements of the contestants' performances. In the initial live shows, each act performs once in the first show in front of a studio audience and the judges, usually singing over a pre-recorded backing track. Dancers are also commonly featured. Acts occasionally accompany themselves on guitar or piano. Each live show has a different theme, and each contestant's song is chosen according to the theme. After each act has performed, the judges comment on their performance. Heated disagreements, usually involving judges defending their contestants against criticism, are a regular feature of the show. Once all the acts have appeared, the phone lines open, and the viewing public vote on which act they want to keep. Once the number of contestants has been reduced to six, each act performs twice in the performances show. This continues until only three acts remain. These acts go on to appear in the grand final, which decides the overall winner by public vote.

The two acts polling the fewest votes are revealed. Both acts must perform again in a "final showdown", for which they may pick new songs, and the judges vote on which of the two to send home. Because four judges vote, ties are possible; in the event of a tie, the result goes to deadlock, and the act that came last in the public vote is sent home. Neither the actual number of votes cast for each act nor their order is revealed. Once the number of contestants has been reduced to five, the act polling the fewest votes is automatically eliminated from the competition.

The winner of The X Factor Indonesia is awarded a recording contract with Sony Music Indonesia that includes investments worth 1 billion rupiah, claimed to be the largest guaranteed prize in Indonesian television history. Several cash rewards from the sponsors, including a new car, were also awarded to the grand finalists of the first season.

There were several rumored candidates for the judging panel, such as Indra Lesmana, Titi DJ, Maia Estianty, Vina Panduwinata, Tompi, Anang Hermansyah, Sherina Munaf, Agnes Monica, Ruth Sahanaya, and Iwan Fals. Eventually, Ahmad Dhani, Bebi Romeo, Rossa, and Anggun were confirmed as judges for the show, with Mulan Jameela stepping in for Anggun during auditions due to her ongoing concert tour in Europe. The show had various potential hosts, including VJ Boy William and Daniel Mananta, but Robby Purba was ultimately announced as the host on November 23, 2012.

Each judge is assigned a category to mentor and selects a small group of contestants to advance to the live shows. During season one, Ahmad Dhani and Rossa's decision to eliminate Alex Rudiart instead of Gede Bagus caused public backlash and calls to boycott the show.

On September 29, 2012, Cross Mobile was named the official sponsor of X Factor Indonesia, with an extensive multi-platform marketing partnership both on and off-air. Kopi ABC was announced as the second official sponsor on December 26, while Indosat Mentari was named the third official sponsor on December 28. Oriflame served as the official make-up sponsor, and Procter & Gamble promoted its Pantene, Olay, and Downy brands through the show.


Monday, 1 September 2014

A brief introduction to Dart



Dart is a modern programming language developed by Google for web development. It provides developers with a new set of tools and features that make it easier to write modular, structured, and object-oriented applications. With Dart, developers can create both client- and server-side applications that are more efficient and scalable.

One of the key benefits of using Dart is its class-based object-oriented programming model, which allows developers to create reusable and extensible code. Dart's syntax is also similar to that of other popular programming languages, such as Java or C#, making it easy for developers to learn and use.

When it comes to client-side development, Dart has its own full-featured library for Document Object Model (DOM) manipulation and event handling. This makes it easier for developers to build complex and responsive web applications. The language also provides optional typing, which can help catch errors at compile time rather than at runtime.

Furthermore, Dart also allows developers to write server-side code. This means that developers can create a homogeneous system that covers both client and server. This is particularly useful for creating web applications that require a lot of server-side processing.

Despite its potential, Dart is still a relatively new language and is not yet widely adopted. As a result, it may not be suitable for use in production environments. However, developers who are interested in broadening their horizons may find it worthwhile to explore the language and try building sample applications.

One of the challenges facing Dart is whether it will be widely supported by popular browsers. While the language provides many benefits, it requires native support from browsers to be widely adopted. Developers will have to wait and see whether major browser developers will implement support for Dart or not. Nonetheless, Dart has the potential to make a significant impact on web development and is definitely worth keeping an eye on.

Sunday, 31 August 2014

Semi-supervised learning : Major varieties of learning problem

Semi-supervised learning : Major varieties of learning problem
Machine learning focuses on five main types of learning problems, with the first four falling under the category of function estimation. These problems can be grouped based on two dimensions: whether the learning task is supervised or unsupervised and whether the variable to be predicted is nominal or real-valued.

The first type of problem is classification, which involves supervised learning of a function f(x) that predicts a nominal value. The function learned is called a classifier, and it determines the class to which an instance x belongs based on its input. For example, the task might involve classifying a word in a sentence based on its part of speech. The learner is given labeled data, which includes instances along with their correct class labels. Using this data, the classifier learns to make predictions for new instances.

The concept of clustering is the unsupervised counterpart to classification. In clustering, the goal is also to assign instances to classes, but the algorithm only has access to the instances themselves, not the correct answers for any of them. The primary difference between classification and clustering is thus the type of data provided to the learner as input: labeled or unlabeled.

Two other important function estimation tasks are regression, where the learner estimates a function that takes on real values instead of finite values, and unsupervised learning of a real-valued function, which can be seen as density estimation. In the latter case, the learner is given an unlabeled set of training data and is tasked with learning a function that assigns a real value to every point in the space.

Finally, reinforcement learning is a type of learning in which the learner receives a stream of data from sensors and is expected to take actions based on that data. There is also a reward signal that the learner tries to maximize over time. The key differences between reinforcement learning and the four function estimation settings are the sequential nature of the inputs and the indirect nature of the supervision provided by the reward signal.
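The split between supervised classification and unsupervised clustering can be made concrete with a toy sketch (ours, not the text's; the data, labels, and nearest-centroid method are invented for illustration): a classifier learns from labeled one-dimensional points, while a tiny two-means clusterer must recover the same groups without ever seeing the labels.

```python
def nearest_centroid_fit(points, labels):
    """Supervised: average the points of each class into a centroid."""
    groups = {}
    for x, y in zip(points, labels):
        groups.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in groups.items()}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def two_means(points, iters=10):
    """Unsupervised: split the points into two clusters, no labels given."""
    c0, c1 = min(points), max(points)  # crude but deterministic initialization
    for _ in range(iters):
        g0 = [x for x in points if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in points if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return sorted([c0, c1])

# Invented toy data: one feature value per word, labeled with a part of speech.
points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
labels = ["noun", "noun", "noun", "verb", "verb", "verb"]

model = nearest_centroid_fit(points, labels)
clusters = two_means(points)
print(nearest_centroid_predict(model, 1.1))  # → noun (uses the label info)
print(clusters)  # two cluster centers are found, but the clusters have no names
```

The clusterer does recover the two groups, but, as noted above, nothing in the unlabeled data tells it which group is "noun" and which is "verb".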
 
Semisupervised learning is a form of machine learning that combines elements of both supervised and unsupervised learning. The distinction between these two approaches lies in whether or not the training data is labeled, with supervised learning relying on labeled data to classify and predict outcomes, while unsupervised learning seeks to discover patterns and structure within unlabeled data. In contrast, semisupervised learning involves providing some labeled data to the learner, while leaving the rest unlabeled. This mixed setting is the canonical case for semisupervised learning, and many methods have been developed to take advantage of it.

However, labeled and unlabeled data are not the only ways of providing partial information to the learner about the labels for training data. For instance, a few reliable rules for labeling instances or constraints limiting the candidate labels for specific instances could also be used. These alternative methods of partial labeling are also relevant to semisupervised learning and are often used in practice. While reinforcement learning could also be seen as a form of semisupervised learning because it relies on indirect information about labels, the connection between reinforcement learning and other semisupervised approaches is not well understood and is beyond the scope of this discussion.

Introduction to Semi-supervised Learning for Computational Linguistics

Introduction to Semi-supervised Learning for Computational Linguistics
Advancements in computational linguistics have resulted in the creation of various algorithms for semisupervised learning, among which the Yarowsky algorithm has gained prominence. These algorithms were developed specifically to tackle problems common in computational linguistics: scenarios where there is a correct linguistic answer, a large amount of unlabeled data, and very limited labeled data. Unlike in a setting such as acoustic modeling, classic unsupervised learning is not suitable for these problems, because not just any way of assigning classes is acceptable. Although the learning method is mostly unsupervised, since most of the data is unlabeled, labeled data is essential because it provides the only characterization of the linguistically correct classes.

The algorithms just mentioned turn out to be very similar to an older learning method known as self-training that was unknown in computational linguistics at the time. For this reason, it is more accurate to say that they were rediscovered, rather than invented, by computational linguists. Until very recently, most prior work on semisupervised learning has been little known even among researchers in the area of machine learning. One goal of the present volume is to make the prior and also the more recent work on semisupervised learning more accessible to computational linguists.

Shortly after the rediscovery of self-training in computational linguistics, a method called co-training was invented by Blum and Mitchell, machine-learning researchers working on text classification. Self-training and co-training have become popular and widely employed in computational linguistics; together they account for all but a fraction of the field's work on semisupervised learning. We will discuss them in the next chapter. In the remainder of this chapter, we give a broader perspective on semisupervised learning and lay out the plan of the rest of the book.

Motivation of Semi-supervised Learning


For most learning tasks of interest, it is easy to obtain samples of unlabeled data. For many language learning tasks, for example, the World Wide Web can be seen as a large collection of unlabeled data. By contrast, in most cases, the only practical way to obtain labeled data is to have subject-matter experts manually annotate the data, an expensive and time-consuming process.

The great advantage of unsupervised learning, such as clustering, is that it requires no labeled training data. The disadvantage has already been mentioned: under the best of circumstances, one might hope that the learner would recover the correct clusters, but hardly that it could correctly label the clusters. In many cases, even the correct clusters are too much to hope for. To say it another way, unsupervised learning methods rarely perform well if evaluated by the same yardstick used for supervised learners. If we expect a clustering algorithm to predict the labels in a labeled test set, without the advantage of labeled training data, we are sure to be disappointed.

The advantage of supervised learning algorithms is that they do well at the harder task: predicting the true labels for test data. The disadvantage is that they only do well if they are given enough labeled training data, but producing sufficient quantities of labeled data can be very expensive in manual effort. The aim of semisupervised learning is to have our cake and eat it, too. Semisupervised learners take as input unlabeled data and a limited source of label information, and, if successful, achieve performance comparable to that of supervised learners at significantly reduced cost in manual production of training data.

We intentionally used the vague phrase “a limited source of label information.” One source of label information is obviously labeled data, but there are alternatives. We will consider at least the following sources of label information:
  • labeled data
  • a seed classifier
  • limiting the possible labels for instances without determining a unique label
  • constraining pairs of instances to have the same, but unknown, label (co-training)
  • intrinsic label definitions
  • a budget for labeling instances selected by the learner (active learning)
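For instance, the "seed classifier" option may amount to nothing more than a couple of hand-written, high-precision labeling rules. The two rules below are invented purely for illustration: they label the instances they fire on and abstain everywhere else, yielding a small labeled seed from raw text.

```python
def seed_rule(word):
    """Two illustrative hand-written rules; everything else stays unlabeled."""
    if word.endswith("ing"):
        return "verb"   # suffix rule fires first (an assumption of this sketch)
    if word and word[0].isupper():
        return "noun"   # capitalized words tagged as (proper) nouns
    return None         # abstain: leave the word unlabeled

corpus = ["Jakarta", "running", "the", "Singing", "table"]
labeled = [(w, seed_rule(w)) for w in corpus if seed_rule(w) is not None]
unlabeled = [w for w in corpus if seed_rule(w) is None]
print(labeled)    # small, rule-labeled seed for a semisupervised learner
print(unlabeled)  # the rest is left for the learner to label
```

Note that the rules need not cover the data, only be reliable where they fire; the semisupervised learner is responsible for extending their decisions to the unlabeled remainder.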

The goal of unsupervised learning in computational linguistics is to enable autonomous systems to learn natural language without the need for explicit instruction or manual guidance. However, the ultimate objective is not merely to uncover interesting language structure but to acquire the correct target language. This may seem daunting since learning a target language without labeled data appears implausible. 
 
Nevertheless, semisupervised learning, which combines unsupervised and supervised learning methods, may offer a starting point. By using unsupervised learning to acquire a small amount of labeled data, semisupervised learning can potentially extend this to a complete solution. This process seems to resemble human language acquisition, where bootstrapping refers to the initial acquisition of language through explicit instruction, and distributional regularities of linguistic forms play a crucial role in extending this to the entirety of the language. Semisupervised learning methods thus provide a possible explanation for how the initial kernel of language is extended in human language acquisition.