Pacemaker and X-rays - Accidental Inventions

Sometimes I feel lucky to have been born in an era where technology has brought a revolution to the whole world. Today, technology has made our lives so advanced that we can get almost anything delivered to our doorstep with a single tap on our smartphones. Revolutionary tools, equipment and gadgets have completely changed the way we live compared to a decade ago. With this advancement, it has become impossible for us to imagine life without these gadgets and equipment.

But to really appreciate the effects of technology – both its virtues and costs – we need to examine the world of humans before technology. What were our lives like without inventions? For that we need to peek back into the Palaeolithic era, when technology was scarce and humans lived primarily surrounded by things they did not make. Since then, inventions and discoveries have changed human life tremendously. A few noted accidental inventions, without which life would have been unimaginably difficult, are described here.

(Pacemaker - The original idea)

With the heart being one of our most vital organs, conditions such as arrhythmias, where the heart beats too slowly, too quickly, or with an irregular rhythm, can have extremely detrimental effects on everyday life. People may be unable to continue an active lifestyle, may suffer breathing problems, and may even sustain organ damage that can lead to terminal illness or death. A pacemaker helps relieve the problems of arrhythmias, increasing longevity and helping those with heart conditions lead a healthier and more active lifestyle. Using electrical pulses, a pacemaker regulates the heartbeat so that blood is pumped through the body at a normal rate.

The noted inventor Wilson Greatbatch created the first implantable pacemaker by accident while attempting to build an oscillator for recording heart sounds. Pulling a resistor from the wrong box led to the advent of the life-saving device in prominent use today: the flubbed circuit produced a rhythmic beating sound, and it was then that Greatbatch decided to scrap his original project and create an implantable pacemaker instead. After two years of fine-tuning the device to perfection, the pacemaker went on to be hailed by the National Society of Professional Engineers as one of the ten greatest achievements of the previous 50 years.

(WAND - the latest developed pacemaker)

A new neurostimulator developed by engineers at the University of California, Berkeley, can listen to and stimulate electric current in the brain at the same time, potentially delivering fine-tuned treatments to patients with diseases like epilepsy and Parkinson's. The device, named WAND, works like a "pacemaker for the brain," monitoring the brain's electrical activity and delivering electrical stimulation if it detects something amiss.

These devices can be extremely effective at preventing debilitating tremors or seizures in patients with a variety of neurological conditions. But the electrical signatures that precede a seizure or tremor can be extremely subtle, and the frequency and strength of electrical stimulation required to prevent them is equally touchy. It can take years of small adjustments by doctors before the devices provide optimal treatment.

WAND, which stands for Wireless Artifact-free Neuromodulation Device, is both wireless and autonomous, meaning that once it learns to recognise the signs of a tremor or seizure, it can adjust the stimulation parameters on its own to prevent the unwanted movements. And because it is closed-loop, meaning it can stimulate and record simultaneously, it can adjust these parameters in real time.

(X-rays   - The original idea)

If we didn't have X-rays, would we just have to guess whether we had a broken bone? Would surgeons merely estimate which part of the body was fractured? And what would we do while the pain of the broken bone became unbearable and the doctor was still figuring out which one was broken? I digress…

X-rays are an integral part of the medical field: they show medical professionals if and where a bone is broken or fractured, where a bullet is lodged, and signs of pneumonia, and they are also used to detect breast cancer through mammograms. The use of X-rays has become so standard in medical practice that it is hard to believe the X-ray was a completely accidental invention.

In 1895, physicist Wilhelm Conrad Röntgen was spending time in his lab in Germany trying to figure out whether cathode rays could pass through glass (you know, typical physics stuff). To block most of the radiation, Röntgen had set up thick pieces of cardboard around a fluorescent screen, but he was in for a surprise when he noticed a strange glow on the screen, penetrating the cardboard barriers, every time he switched on the cathode ray tube.

While others might have decided that this level of radiation was terrifying and scrapped the project, Röntgen investigated the glowing screen and found that the rays permeated several objects. He even placed his hand in front of the screen, only to be greeted by the sight of the bones in his hand, thus discovering that the rays could penetrate almost anything except dense materials like bone and lead.

It took years to perfect X-rays, as scientists and doctors didn’t initially realise the harmful effects of radiation, which can cause fatal conditions like skin cancer. Today, X-rays are used widely in medicine and also in airports for extra security measures.

(CERN - the latest developed X-ray)

What if, instead of a black and white X-ray picture, a doctor of a cancer patient had access to colour images identifying the tissues being scanned? This colour X-ray imaging technique could produce clearer and more accurate pictures and help doctors give their patients more accurate diagnoses.

This is now a reality, thanks to a New Zealand company that scanned a human body, for the first time, using a breakthrough colour medical scanner based on the Medipix3 technology developed at CERN. Father-and-son scientists Professors Phil and Anthony Butler of the Canterbury and Otago universities spent a decade building and refining their product.

Medipix is a family of read-out chips for particle imaging and detection. The original concept of Medipix is that it works like a camera, detecting and counting each individual particle hitting the pixels while its electronic shutter is open. This enables high-resolution, high-contrast, highly reliable images, making it unique for imaging applications, particularly in the medical field.

And oh yes… all this while we thought science was difficult! I always pondered these unimaginable inventions and cursed the scientists for being so smart! Now I know the secret behind unbelievable scientific inventions: it was purely accidental.

AI and Improving the Manufacturing Bottom Line

Change is inevitable. We constantly need to adapt to the changes around us to survive, and the manufacturing industry is no exception. Many companies have used artificial intelligence to improve their bottom line. Artificial intelligence helps identify flaws in the system, suggests purchases to new and returning customers, and streamlines the supply chain process. In fact, according to industry reports, the supply chain is one area that is already leveraging the benefits of AI.

With the growth of manufacturing industries, the volume of data has also increased drastically. Hence, companies are looking for more sophisticated systems to make business intelligence processing more effective. This is the primary reason manufacturing companies are leaning on AI techniques to increase productivity and revenue.

AI and Supply Chain Management

The prime responsibility of supply chain management is to respond to customer demand by matching it with supply as efficiently as possible. Three important factors lead to an inability to match demand and supply:

  • The inability to forecast the real demand
  • Production gaps leading to reduced supply
  • The difficulty in synchronization between different supply chain partners

All these factors lead to failures and losses because our current systems are incapable of providing correct information in a timely manner to manage the demand-and-supply equation. Any information gap is detrimental to an efficient supply chain. The big question is: how does a company use artificial intelligence to better manage demand and supply?

Enhancing Demand Forecasting Accuracy

It is not easy to function in a supply chain environment if you are unable to forecast demand. Traditional forecasting relies on statistical techniques applied to historical sales data. These techniques struggle to process large volumes of data and have often fallen short of providing accurate predictions. With AI in place, however, it is easier to process that data and improve demand forecasts.
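AI forecasters are typically trained models, but the traditional statistical techniques mentioned above can be illustrated in a few lines. Below is a minimal, hypothetical sketch of simple exponential smoothing over historical sales; the function name and the sales figures are invented for illustration, not taken from any real system.

```python
def exponential_smoothing(sales, alpha=0.3):
    """Forecast next-period demand from historical sales using
    simple exponential smoothing, a classic statistical method.
    alpha controls how strongly recent sales outweigh older ones."""
    forecast = sales[0]
    for actual in sales[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

# Hypothetical monthly unit sales
history = [120, 135, 128, 150, 160, 155]
print(round(exponential_smoothing(history), 1))  # → 145.2
```

Methods like this work well on small, clean series, but they cannot fold in the external signals (promotions, weather, competitor pricing) that machine-learning forecasters are built to exploit.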

Bridging the Production Uncertainty Gaps

In the manufacturing industry, machines will break down, and you will not always be able to deliver. This leads to low output, delayed shipments, and interruptions in the supply chain. Artificial intelligence helps maintain equipment by continuously collecting information on equipment condition and breakdowns. Timely repairs can be scheduled based on this information, helping avoid delays.
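As a rough sketch of this idea, equipment sensor data can be screened for readings that deviate sharply from a machine's normal range; such outliers become candidates for preventive inspection before a breakdown occurs. The snippet below is a hypothetical illustration using a simple z-score test, not any specific vendor's predictive-maintenance system, and the vibration readings are invented.

```python
import statistics

def flag_anomalies(readings, z_threshold=2.0):
    """Return indices of sensor readings that deviate from the mean
    by more than z_threshold standard deviations."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > z_threshold]

# Hypothetical vibration readings; the spike at index 5 may signal wear
vibration = [0.41, 0.39, 0.42, 0.40, 0.38, 0.95, 0.41]
print(flag_anomalies(vibration))  # → [5]
```

Production systems go further, learning each machine's failure signatures from history rather than applying a fixed threshold, but the flag-then-inspect loop is the same.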

Smarter Inventory Management

Managing inventory is one of the biggest challenges for every supply chain manager. However, with AI’s predictive modelling, it’s easier to predict how much stock is needed and decrease or increase production, thereby bringing down the cost of holding inventory.

AI is the Future in Supply Chain Management

Adopting the latest technologies to meet higher consumer expectations and demands is the need of the hour. AI helps throughout the supply chain management process with faster turnarounds for better results. Artificial intelligence will not only make people’s lives easier but also streamline businesses.

Thursday, 14 February 2019 09:33

Automated Journalism: Did a Robot Write This?


The rise of machines taking over the human world is no longer a dystopian dream. The dread associated with robots taking over is a reality from which there is no escape. A great example is the manufacturing sector, long a supplier of well-paying jobs, which is now being transformed by the introduction of machines. But what about white-collar jobs in the information sector? Will the emergence of machines affect professions in journalism and media?

Automated journalism is shaking up traditional journalism and the role of the news reporter by using computer algorithms to transform raw data into news stories that mimic the ones a human might have written. Media has long been at the forefront of technological innovation, from satellite communications to the Internet and social media. With the advent of AI and big data, it was only a matter of time before journalism was subject to the inevitable change in the way it is produced and distributed.

Automated journalism takes the interaction of media with machines to a whole other level. It automates each step of the news production process, from investigation to the actual production and distribution of news. Automated news stories work well for fact-based reporting, where structured and reliable data can be transformed into content that is not just cheaper and quicker to produce but can also be personalized to the needs of the reader. No wonder it is perceived as a threat to traditional journalism's practices and quality, and to employment within the industry.

Automated Journalism Example

As publishers struggle to cope with dwindling newspaper subscriptions, natural language generation companies such as Automated Insights, Yseop, and Narrative Science have developed automated reporting systems, chatbots, and algorithms that have already been adopted by larger news organizations such as the Associated Press, Forbes, The New York Times and the Los Angeles Times.

The Washington Post has been experimenting with news stories generated by its Heliograf smart software, which made its debut at the Rio 2016 Olympic Games. By analyzing and reporting data from the games as it emerged, Heliograf was able to keep up with scores and medal counts in real time, freeing up journalists to work on other content. In its first year, Heliograf produced a staggering 850 articles and won an "Excellence in the Use of Bots" award for its 2016 election coverage.

Using natural language technology, these tools spot patterns and trends in raw data, match them to the relevant phrases in a story template, and add the information to create a narrative that can be published across different platforms, all in a matter of seconds. The Associated Press's automated corporate-earnings stories are a well-known example.
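The template-matching step described above can be sketched in a few lines. This is a deliberately simplified, hypothetical example (the team names, scores and template text are invented), not the actual pipeline used by any of the vendors mentioned.

```python
def fill_template(template, data):
    """Slot structured values into a pre-written story template,
    the core mechanic behind template-based automated journalism."""
    return template.format(**data)

# Hypothetical structured match data
game = {"winner": "Hawks", "loser": "Eagles",
        "winner_score": 3, "loser_score": 1}
template = ("The {winner} beat the {loser} "
            "{winner_score}-{loser_score} on Saturday.")
print(fill_template(template, game))
# → The Hawks beat the Eagles 3-1 on Saturday.
```

Real systems additionally select among many candidate phrasings based on the data, for instance choosing "edged" or "crushed" depending on the margin of victory, which is what makes the output read less mechanically.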


The typical uses of automated journalism include news stories on sports and finance – where it’s easier to crunch numbers and convert raw data into coherent news stories. As technology improves, robot journalism is likely to move into more challenging areas.

Machine Learning Journalism: Potential

The potential of machine learning journalism is vast. Speed and efficiency are two factors that distinguish AI article writers from their human counterparts. From monitoring sources and collecting statistics to writing and disseminating information, the technology shines especially in factual news such as weather forecasts, traffic reports, election results, and sporting events. Along with speed, one of the standout features of automated articles is the reduction of errors and of the need for human editing.

Robot Writing Limitations

Despite their multiple advantages, robot writers are plagued by certain limitations. While a robot can place facts in a preset template, article-writing robots are severely lacking when it comes to expressing emotion through text the way a journalist can. A machine cannot craft a colorful feature story or conduct an in-depth analysis of a subject. The lack of adaptability also means robot journalists cannot adapt to new styles of writing and new tools. Journalists fear that the widespread use of automation could, in the long run, lead to a loss of editorial identity.

One of the biggest disadvantages of automated journalism is the risk of spreading false news. Unlike a journalist, a machine cannot detect certain flaws in a news story. For example, Quakebot, a robot designed to report earthquakes automatically, once announced an earthquake, reporting several deaths, when the quake was in fact 92 years old. The cause of the mistake was human: a human action replaced the date of the 1925 event with 2025, but the robot failed to spot the error and went ahead and disseminated the news.

Will Robots Replace Journalists?

Instead of worrying about robots hurting job prospects, journalists can leverage automated journalism as an assistant, letting it handle data-driven stories and press releases while they focus on more complex topics that require deeper analysis, increasing their output and growing engagement with their brand.

According to Automated Insights, nearly twenty percent of the time a reporter spends covering financial news can be freed up with the help of AI, improving accuracy and giving reporters more time to concentrate on the content and storytelling behind an article rather than on fact-checking and research. In the end, this could mean a win-win for journalism rather than its demise.



Unnat Bharat Abhiyan

Pune: Education does not mean limiting a student to a curriculum. It goes beyond that; in fact, co-curricular and extracurricular activities play an important role in shaping a student into not merely a degree holder but a responsible citizen too. Social bonding at a young age helps develop an understanding of society and of the people living around us. With a view to inculcating this understanding, Vishwakarma University (VU) has joined the MHRD's Unnat Bharat Abhiyan. Recently, VU adopted five villages around Pune city under this project. In the coming days, students and faculty will together ensure that these villages get pure drinking water, which is a severe issue at present.


What is Unnat Bharat Abhiyan?

Unnat Bharat Abhiyan is inspired by the vision of transformational change in rural development processes by leveraging knowledge institutions to help build the architecture of an Inclusive India. Technical institutes from across the country have been selected for the project and VU is one of them. The idea is to adopt villages, understand their persistent issues and find solutions with the help of technical know-how of students as well as faculty members.


How is VU approaching the project?

'The aim is to sensitise students to the issues of society. Often, while living in a metro city, we are unaware of the problems of people staying in villages on the outskirts of the city. Students should understand these real-world problems and find answers to them using the resources available there. We are approaching the Unnat Bharat Abhiyan with this view,' says Prof Maya Kurulekar of the Faculty of Science and Technology, who is guiding the students.


What did VU do as a part of the project?

VU has adopted Samrevadi, Bhadalvadi, Thopatevadi (1 and 2) and Mordari villages near Sinhgad Fort. All these villages have fewer than 400 households, and they lack many primary facilities, including pure drinking water.

As the project started, the first task was a survey of these households. Nodal officer Prof Kailas Bhosale has been coordinating the project for VU. He motivated as many as 67 students from the first-year BTech course to carry out a comprehensive survey in the five villages. According to Prof Bhosale, the survey was carried out to identify the basic issues, and its findings will be uploaded to the Unnat Bharat Abhiyan portal. While doing the survey, students found that villagers were using well water of poor quality, which causes medical issues. They also found that medical facilities were far away, and villagers needed to travel quite a distance to access them.

'After the survey was done, we decided to focus on one issue at a time. To begin with, we have focused on providing pure water. We have a tie-up with the Wilo Foundation for a Water Quality Centre of Excellence. Wilo India Limited has developed and sponsored Water ATMs for this centre, where water is processed and pure water is provided. We have one such Water ATM at the VU campus. We plan to establish similar Water ATMs in these villages, from where villagers can get pure drinking water. We have also discussed with the Gram Panchayats the daily supply of water to these Water ATMs for purification,' mentions Prof Bhosale.



Along with the 67 first-year BTech students, Prof Mrunmaee Randade, Prof Sandeep Kumar Shukla, Prof Jameel Ahmad Ansari, Prof Sonali Botkar, Prof Rushika and Prof Maya Kurulekar are involved in this project. Prof Kailas Bhosale is the nodal officer of the project for VU.




Sakal Today, Pune Edition dated 13th January, 2019

The activities of VU under this project will be monitored by coordinators of Unnat Bharat Abhiyan at IIT-Delhi and IIT-Bombay.

Monday, 14 January 2019 07:09


The most effective means of staying in touch with our real self and maintaining a sense of identity is to engage in reflection and introspection. The psychological consequences of not taking time out to pause and reflect are wide-ranging.


 "If we were not so single-minded

about keeping our lives moving,

and for once could do nothing,

perhaps a huge silence

might interrupt this sadness

of never understanding ourselves

and of threatening ourselves with death."


Poet Pablo Neruda's lines from the poem 'Keeping Quiet' aptly describe our existence in the hyper-real world, where we all seem to be continually pushing the Sisyphean wheel. We dare not waste a minute! In our relentless race to perform and produce, reflection and introspection have become lost arts. We are consumed by the temptation to just finish this or do that. Perpetually denying ourselves uninterrupted time to reflect may result in losing connection not just with others, but also with ourselves. Denying the mind the freedom to wander off in different directions is counterproductive to creativity, deep insight, productivity and personal growth.

Doing Nothing and the Culture of Shame

Try telling your parents that you want to take a year off to reflect on your life. You are likely to get an earful on losing out in the race! In most cultures, doing nothing is associated with shame, guilt and wasting time.

We also associate doing nothing with boredom. So, when we have free time, we try to fill it with distraction-inducing activities like constantly checking our phones or watching TV. These stimulate our brain and give us the impression of being happy, which makes them hard to resist. We get sucked into the vicious circle of social media: looking at pictures, liking and commenting on posts. It might give us an impression of being productive, but the fact is that social media is a reactive medium; it lacks originality, and its prolonged use burns us out psychologically.

Corporate Workplaces

Recently a video went viral on social media in which an employee is taunted by a colleague as he leaves the office after a gruelling nine hours of work. The employee gives him a befitting reply: he has a life outside the office too! As much as the video tickles us, it is relatable and rooted in reality. Unfortunately, the culture of putting in long hours at work is encouraged by contemporary organizations because it suits their purpose: employees who stay late are rewarded, encouraged and supported. Instead of discouraging workaholism, organizations go by the attitude of 'I am paying that person a salary; why aren't they at their desk when I am still in office?' A perception forms that people who work longer work harder, but there is no relationship between working long and working smart. Workaholic work environments are unhealthy; they may lead to serious mental and physical health problems, relationship breakdowns and low motivation. Companies fail to realize that the best employees are those who both act and reflect, and reflection requires unplugging from the compulsion to keep busy.

Compulsory Downtime

Somewhere down the line, it has become acceptable to live at an unhealthy pace. But remember, Newton conceived the law of gravity while contemplating under a tree! Likewise, Archimedes had his eureka moment in the leisure of his bathtub! A slower rhythm of life gives our mind the necessary downtime.

Downtime, which includes reflection and introspection, enables not only our creativity and our need for rest; it also enables the formation and maintenance of our deep sense of being and identity. Identity is discussed in terms of the Self by the great Swiss psychiatrist Carl Jung. For Jung, the integration of all our life experiences into a whole forms the Self, and this Self needs to be nourished with contemplation and introspection to maintain a sense of identity.

We need a certain balance and equilibrium within ourselves and with our environment. Yet in the forever-connected, time-driven world we inhabit, we are far from equilibrium. The time and space for personal reflection are often consumed by long work hours, social commitments and smartphone addiction, resulting in a lack of the mental peace and quiet necessary for inner stability. Without free downtime for the mind, our emotional, spiritual and psychological health will suffer. It is during 'wasted time', or downtime, that our innermost self speaks to us. The events of the past that have held personal meaning for us whisper to us in those quiet moments.

So take long walks, longer naps, and indulge in daydreaming to be the best and most efficient version of you.

The author of this article, Richa Singh is a content writer with Investronaut. She is a voracious reader and a keen traveller.

Friday, 04 January 2019 12:15

Ways to Improve Creativity at the Workplace





Companies look for the X-factor that will distinguish them from their competitors and give them greater prominence in the market. For that, it is important that workplace potential is harnessed optimally and that employees are given a conducive environment in which to develop their creative potential. If ever proof of this were required, Google's 20 percent program should settle it. Google allows its developers to spend 20% of their working time on side projects, with the aim of eliciting, nurturing and advancing creativity and innovation by giving employees space and time to innovate. And if ever its efficacy were in doubt, the fact that some of Google's best products, like Gmail and Google Talk, originated in this scheme should quash it.

Creativity may be inborn, but it needs the right environment and stimuli to flourish. As an employer, it is your responsibility to ensure that such a climate exists at your workplace. But the million-dollar question is: how do we produce such an environment? Let us look at a few techniques that can foster creativity in the workplace:


A key element in ensuring creativity is to be empathetic to the creative process. Ensure that boredom finds no place in your work structure. Junior employees often complain, and not without reason, that they get repetitive, mundane work, and as the axiom goes, familiarity breeds contempt. An employee will not be motivated by an easy task with which he or she is too well versed. Their interest is piqued when there is an element of challenge and unfamiliarity in the task. When employees feel there is an opportunity to learn something they do not know, their interest and creative prowess are amplified automatically.

Often a challenge brings forth the best in a person. Employees prefer a task that sets a premium upon its solution and challenges their creative potential. Take the example of the ubiquitous Walkman of the late 70’s. One day, Sony co-founder Akio Morita challenged his chief engineers to create a hi-fi no bigger than a small wooden block. The challenge fired the imagination of his engineers and led to the release of the Walkman in 1979. Creative people like challenges — the more challenging the task, the better.


You must ensure that creativity is placed on a par with the other facets of the job, if not higher. Creative ideas that lead to an increase in efficiency should be rewarded proportionally; this creates the necessary motivation. However, we may ask: how should we decide on the reward? Money, bonuses, awards? I am afraid not! They can be useful, but creativity and innovation cannot be ensured by these factors alone. For starters, honest praise and appreciation are a good way to keep an employee motivated. Employees often treasure praise from people they respect, such as their peers, boss or mentor. Remember: while a difficult task may be worth their while, a thankless task is not.


Without implementation, no reward or incentive can motivate the employee to engage in creative thinking. The creative suggestions put forth by the employees must be turned into action. It is not enough to simply gather creative ideas. If an employee discerns that their creative ideas are not implemented, their motivation is likely to be reduced. A reward will not offer an employee the same satisfaction as its implementation.


Often, employees may be hesitant to divulge their ideas for fear of making mistakes. They need to be reassured that mistakes are part of the creative process. Creativity, in fact, often works by trial and error until one settles into a creative pattern. Employer support is one of the key elements of a creative workplace; if the employer is unresponsive and unsupportive, employees will in all likelihood be scared away from experimenting.


Instead of providing assignments with restricted guidelines and instructions, apprise the employee of the ultimate goal of the assignment. Allow them to get the work done, as and how they please, with minimal interference on your part. Trust the employee’s capacity to deliver. This makes the employees feel motivated and recognize that they have authority and power over their fields.


Exchange of ideas is difficult in a situation where everybody thinks alike. Employees should therefore be hired from diverse backgrounds, with diverse qualifications and skills. Homogeneity can undoubtedly lead to greater team bonding and stronger interpersonal relationships, but it will prove to be the bane of creativity at the drawing board: a uniform and agreeable crowd is a serious impediment to creative ideas. Consider relaxing your recruitment norms and allowing more diverse criteria in your selection, which will bring diversity into the workplace. Hiring staff from different domains and backgrounds, and letting them mingle on projects, is a handy tool for ensuring workplace creativity. Organize more informal interactions between employees with dissimilar profiles to facilitate the exchange of ideas.


Finally, it is important to be fair to your employees and treat them with respect. Make sure they never perceive that they have been wronged, for that leads to a total erosion of motivation and creativity.

The author of this article, Richa Singh is a content writer with Investronaut. She is a voracious reader and a keen traveller.





Thursday, 20 December 2018 09:47

Media Laws and Freedom of Speech

VU Media Laws and Freedom of Speech Article Image

Media Laws and Freedom of Speech

Tweet. Post. Express yourself, whether in print or digitally, and no one can harm you. But this wasn't so before 2015. That year, in a landmark judgement, the Supreme Court struck down the controversial Section 66A of the Information Technology Act, 2000, which allowed the state to arrest people for posting offensive content. The incident that led up to this judgement occurred in 2012, when two young girls were arrested: one wrote a Facebook post criticising the bandh that followed the death of Shiv Sena leader Bal Thackeray, and the other 'liked' it. The incident raised critical questions about freedom of expression in the country. The result is that, legally, one can express oneself without fear of retribution.

There is a whole framework of media laws in India that affect our lives directly or indirectly. These laws are closely tied to our fundamental right to freedom of speech, enshrined in the Constitution of India under Article 19(1). Media laws in India encompass all the legal issues related to censorship, copyright, information technology (IT), defamation, broadcasting, privacy, telecommunication, entertainment, advertising and confidentiality, in any form of media: TV, film, music, publishing and the internet.

Media, often called the fourth pillar of democracy, is an important institution in a democratic setup where conflicting ideas can be debated. A critical press is the watchdog of a thriving democracy, and it needs to be protected at all times. In 1950, 'we the people' of India gave ourselves the security of a Constitution which protects our fundamental right to speech, among other rights. But free speech isn't absolute in India. Article 19(2), which follows, prohibits us from circulating content that goes against the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or that amounts to contempt of court, defamation or incitement to violence.

Blurred Boundaries

What constitutes freedom and where do we draw the line? Where does freedom of speech end and where do laws begin? Laws and freedom of speech often find themselves locking horns, forming an interesting trajectory of events. Most laws are subject to interpretation by the courts, and what is freedom to one could be offensive to another. That is why, over the years, content in the form of books, films, plays, advertisements and speeches has ruffled many feathers, leading to court cases. Courts and establishments have upheld or diluted free speech over the years. Let's run through some of the defining moments in Indian history where media curbs became the talking point for the freedom of speech.

1977 - The Emergency in India from 1975 to 1977 saw censorship of the press. Many noted journalists and opposition leaders languished in jail, and films and stage plays felt their share of the heat. The film 'Kissa Kursi Ka', a satirical take on the Emergency directed by Amrit Nahata, was banned and its copies confiscated by the then establishment. Gulzar's film 'Aandhi' met the same fate. The movie, said to be based on the relationship between the then Prime Minister Indira Gandhi and her estranged husband, was denied a full release and was banned in 1975, a few months after its release.

1989 - Salman Rushdie's book 'The Satanic Verses' was banned in India for allegedly hurting the sentiments of Muslims. The book led to a fatwa against Rushdie by the then Iranian leader Ayatollah Khomeini, forcing Rushdie into hiding for as long as a decade. The ban led to a widespread debate about the freedom of expression across the world. The amusing part remains that no one in the government bothered to read the book before issuing the diktat.

2015 - Online content, though still outside the purview of the censor board in India, comes under attack from time to time. 'AIB Roast', a comedy show, was taken down from YouTube by the company after charges of obscenity mounted.

2015 - 'What makes us uncomfortable should be banned!' seems to be the criterion for banning content in our country. 'India's Daughter', a documentary made by British filmmaker Leslee Udwin after the infamous Nirbhaya rape in Delhi, was banned by the government fearing that it might portray India in a bad light.

2017 - The independent online news portal The Wire broke the story of a 16,000-fold spike in the revenue of a company owned by Jay Shah, son of the BJP president. The story got considerable traction and was widely read. Shortly after, journalist Rohini Singh and The Wire were slapped with a defamation case. The Wire declined the Supreme Court's suggestion to settle the matter out of court, maintaining that the article was based on facts and aimed at informing the public. The court later observed that there could be no gagging of the press and that The Wire owed no apology to anyone.

2018 - The Aadhaar debate has reached the highest court in the country. The government intends to make the ID mandatory for something as simple as buying a SIM card, raising issues of individual privacy and its potential misuse. A team of young lawyers is fighting the case in the Supreme Court to establish that the right to privacy is "non-negotiable". Their fight will decisively shape Indian lives in the age of unique IDs, technology and machines.

More often than not, freedom has to be fought for and fiercely guarded. Freedom of speech is an ongoing debate, manifested in every individual decision and in the decisions taken by the State for us. Either way, it is a cause worth fighting for. Take it for granted and it slips away in a wink! That is why the envelope has to be pushed every day. Young law graduates have a lot to contribute to this field, and many universities now offer a specialization in media laws. Vishwakarma University, Pune has robust LL.B. and LL.M. programmes. If you feel passionately about the issue, allow yourself a chance to master these laws and contribute to a free, unmuzzled and healthy society.

The author of this article, Richa Singh is a content writer with Investronaut. She is a voracious reader and a keen traveller.

Thursday, 20 December 2018 09:11



Additive Manufacturing

If a nomination were held for the wonder of the 20th century's industrial revolution, 'Additive Manufacturing' would certainly bag it. Additive Manufacturing (AM) needs to be seriously evaluated as a complement to existing, traditional manufacturing methods.

An overview of Subtractive Manufacturing

Traditional subtractive manufacturing continues to dominate major production and assembly lines, such as automobiles, FMCG, electronics and many more.

In subtractive manufacturing, you start with a hunk of material and remove the unwanted material from it bit by bit until you reach the final shape. Subtractive manufacturing depends heavily on CNC (Computer Numerical Control) machines, which allow the required shape of the product to be programmed by a computer. The programmed data enables the same operations to be repeated reliably, over and over again.
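The repeatability of CNC comes from the fact that a part program is just data that the machine replays. As a toy illustration, here is a Python sketch that emits a tiny program in a simplified G-code-like dialect (the square-outline part, function name and feed values are hypothetical; a real controller would also need tool, spindle and safety setup):

```python
def square_outline_program(side, depth, feed):
    """Emit a tiny G-code-like program that cuts a square outline.

    G01 is a linear cutting move, G00 a rapid move. The dialect is
    simplified for illustration; it is not a real controller's full
    program format.
    """
    corners = [(0, 0), (side, 0), (side, side), (0, side), (0, 0)]
    program = [f"G01 Z-{depth} F{feed}"]                  # plunge to cutting depth
    program += [f"G01 X{x} Y{y} F{feed}" for x, y in corners]
    program.append("G00 Z5")                              # retract the tool
    return program

# The program is plain data: replaying it any number of times
# produces the same toolpath, hence identical parts.
for line in square_outline_program(40, 2, 300):
    print(line)
```

Because the output is a deterministic list of commands, running the generator twice yields byte-identical programs, which is exactly what makes CNC production repeatable.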

There are many subtractive manufacturing approaches and methods, but milling and turning are the most common. A CNC mill has a rotary tool that cuts away metal, wood, foam or other material to form the final product.
Turning is done on a lathe, where the spinning part is fed against a stationary cutter, while erosion and grinding are used to remove smaller amounts of material.

The limitations of cutting and drilling technology severely restrict the creation of hollow parts from a single piece, and limit the detail that can be created with a single tool. Despite these technical limitations, subtractive manufacturing delivers precision.

In subtractive manufacturing, air costs money: every bit of material that has to be removed from the original stock incurs cycle time, and thus cost.
Complexity is limited to what can be removed. Subtractive works best when relatively little material needs to be removed from the stock; a block with a few holes or features is typically much cheaper to machine.

The traditional manufacturing method has ceased to innovate of late. Additive manufacturing has made its entry and set the market on fire with its many advantages. Additive certainly promises to overcome the drawbacks of subtractive manufacturing, but it won't replace the latter. Let us now look at additive from a revolutionary point of view.

As Mechanical Engineering graduates, it is natural for us to look for valid reasons before shifting our trust away from traditional manufacturing. The newer technology has to withstand the market forces at play.

A success story: when AM was used to save a life

A 32-year-old woman with tuberculosis of the spine suffered severe damage to her first, second and third cervical vertebrae, threatening her with paralysis and even death. She had no support between her skull and lower spine; the disease had caused damage so extensive that surgery alone could not fix it. Coupled with that, her immune system was weakened by drugs she was taking for infertility. The only option was to support the skull and vertebral column with a rigid replica of the bone structure. A team of surgeons decided to experiment with a titanium implant, customised to perfectly fit her spine, and replaced her damaged vertebrae with a 3D printed implant. The implant was tested for biomechanics and stress risers, with input from multiple design teams. And when everybody around had lost hope of her recovery, the 3D printed implant proved a boon and saved her life!
Let us now decode AM and take a closer look at it.

Additive Manufacturing

What is Additive Manufacturing?
Additive manufacturing, also known as 3D printing, is a process that creates a physical object from a digital design. Additive manufacturing uses computer-aided-design (CAD) software or 3D object scanners to direct hardware to deposit material, layer upon layer, in precise geometric shapes. As its name implies, additive manufacturing adds material to create an object.

How does additive manufacturing work?

Additive manufacturing produces three-dimensional objects one superfine layer at a time, with each successive layer bonding to the preceding layer of melted or partially melted material. A range of substances can be used for layering, including metal powder, thermoplastics, ceramics, composites, glass and even edibles like chocolate.

Objects are digitally defined by computer-aided-design (CAD) software that essentially slices the object into ultra-thin layers. This information guides the print head as it precisely deposits material upon the preceding layer; alternatively, a laser or electron beam selectively melts a bed of powdered material to form each layer. As the material cools, the layers fuse together into a three-dimensional object.
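To make the slicing step concrete, here is a minimal sketch in Python. It assumes an idealised analytic solid (a sphere) instead of a real CAD mesh, so each layer's outline reduces to the radius of a circular cross-section; the function name and layer representation are illustrative, not any particular slicer's API:

```python
import math

def slice_sphere(radius, layer_height):
    """Slice a sphere into horizontal layers.

    Each layer is (height, outline_radius): the circle the print
    head or laser would trace at that height. A real slicer does
    the same job on arbitrary triangle meshes exported from CAD.
    """
    n_layers = round(2 * radius / layer_height) + 1
    layers = []
    for i in range(n_layers):
        z = -radius + i * layer_height
        # The cross-section of a sphere at height z is a circle of
        # radius sqrt(R^2 - z^2); max() guards against tiny negative
        # values caused by floating-point error near the poles.
        r = math.sqrt(max(radius * radius - z * z, 0.0))
        layers.append((round(z, 4), round(r, 4)))
    return layers

# A 10 mm sphere at a 0.2 mm layer height gives 101 outlines,
# widest at the equator (z = 0) and vanishing at the poles.
layers = slice_sphere(10.0, 0.2)
```

The stack of `(height, radius)` pairs plays the role of the layer data described above: each entry tells the machine what outline to trace before stepping up by one layer height.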

This journey towards the 3D object is revolutionising manufacturing. Gone are the intermediary steps, like the creation of moulds or dies, that cost time and money.

Additive manufacturing advantages

The strengths of Additive Manufacturing lie in those areas where conventional manufacturing has its limitations. The technology is of interest where a new approach to design and manufacturing is required so as to come up with solutions. It enables a design-driven manufacturing process - where design determines production and not the other way around.

What is more, Additive Manufacturing allows for highly complex structures which can still be extremely light and stable. It provides a high degree of design freedom, optimisation and integration of functional features, the manufacture of small batch sizes at reasonable unit costs and a high degree of product customisation even in serial production.

When compared to the relative tedium of traditional manufacturing, AM offers a more dynamic, design-driven process.

Complex geometries

The technology enables engineers to design parts that incorporate complexity that is not possible using other methods. Intricate features can be incorporated directly into a design. Parts that previously required assembly and welding can now be produced as a single part, which makes for greater strength and durability. Designers are no longer restricted to the limitations of traditional machines and can create parts with greater design freedom.

Time saving

Parts are manufactured directly from a 3D CAD file, which eliminates the cost and lengthy process of having fixtures or dies created. Plus, changes can be made mid-stream with virtually no interruption. Since AM follows an end-to-end digital process, it eliminates traditional intermediate steps and makes it possible to make alterations on the run.

Weight saving

In designing everything from bridges to skyscrapers, engineers have long sought to minimise weight while maximising strength. By incorporating organic structures into designs, designers can eliminate substantial weight while maintaining the part’s strength and integrity. With additive manufacturing, designers realise the dream of utilising organic structures to greatly reduce the weight of objects.

The voice behind this article is Ashwini Gaikwad, Content Writer, Investronaut.

Rohit Karnatak and Prakash Patil are the in-house graphic designers with Investronaut. While seated in a snug corner of our office, I tried to make sense of their journey through the lights and shades of their profession. What is the nature and scope of this profession? Read to find out.

The fundamentals of graphic design

1. Explain to the lay reader out there: who is a 'graphic designer'? How does one become one, and what qualities or traits must an aspiring graphic designer have?

Response - Simply put, bringing inanimate text to life visually!  There is no one job that a ‘graphic designer’ does, rather a designer offers a bouquet of services to make the product visually appealing. 

A graphic designer assembles images, illustrations, typography, vector elements (shapes) and colors to bring a design to life. Graphic designers play a vital role in the animation and publishing industries, where they design sets, backgrounds and logos. They also work as user interface designers, brand identity designers, and web and app designers. The options are numerous.

Being a creative profession, an eye for color, an understanding of shapes and a creative bent of mind are a necessity. A degree in graphic design can teach you the basics, but as in any other field, most of the learning takes place 'on the job'. To succeed as a designer, you need to keep your eyes open, be observant and learn to think on your feet.

2. Please walk us through your journey to become a designer.

Rohit - I am self-taught. I was working as a Computer Engineer when, by a slice of good luck, I was assigned a project to develop a website and was expected to learn and use Photoshop for it. That's when I discovered the joy of creating something new! I instantly knew this was what I wanted to pursue further.

Initially, I relied on video tutorials to learn the basics. It was tough, as I had no prior training in the field. My team at Flexton Inc., the company I was working for at the time, quickly understood my inclination towards graphic design, so I was deliberately assigned design-related projects, which turned out to be a blessing indeed! I have now been working as a graphic designer for four years and have not looked back since.

Prakash - I was never good at studies but I excelled in drawing and painting. My teachers in school spotted that and encouraged me to take up Elementary and Intermediate art exams. That gave me confidence and I decided to study arts after school. My two years in art school and later a course in graphic designing gave me a solid grounding and understanding of the field. 

A degree in graphic design is not mandatory, but it teaches you how to brainstorm ideas, inculcates a sense of color and instils the importance of observation.

In the last eight years, my journey as a graphic designer has given me immense creative gratification, and a diverse range of assignments has fine-tuned my craft.

3. Drawing from personal experiences, as well as those you have known, how does a graphic designer contribute to the development, marketing and distribution of a product?

A sample of Rohit Karnatak’s work showcasing his angst against capitalism, hollow notions of being cool and his idea of boundless imagination. 


Rohit & Prakash - A graphic designer's contribution is immense! Human beings are primarily visual: in most cases we see first and then decide. This is where the designer comes in. Graphic designers transform a product into a brand, channelling their creativity into campaigns that intrigue and delight customers, so that customers stay longer; a longer stay naturally means greater revenue. The designer amplifies the message visually, which in turn motivates the customer to try the product. Interest in an article on the web, or curiosity towards a book, is greatly influenced by the cover image. As they say, the first impression is the last impression, and so we often judge a book by its cover. Be it a hoarding, a logo or an advertisement, it is the designer's job to get it right, and their work goes a long way towards determining the reach and popularity of a product.

4. From your own experience and of others, what are the possible challenges a graphic designer faces in course of work? How must they be  overcome?

Rohit & Prakash - The biggest challenge is to not be repetitive. If we are promoting three brands of soap, each one has to have a distinct identity. So one has to have a fresh perspective each time, with new ideas that can be translated into new designs, videos and images.

Some brands, like Mercedes and Wills, have established their brand identity with minimalistic designs - a simple logo against a dark background. These look very simple, but each is actually the result of careful thought by the designer. Once the logo is established, the designer's job increases manifold, as the template is set: now the designer has to walk a tightrope between keeping the brand identity intact and varying it enough to keep the campaign fresh, else stagnant waters stink and familiarity breeds contempt.

Another challenge is to keep pace with the new software that comes to market every so often. One has to keep learning it to avoid turning obsolete.

On a personal level there are days when you are not feeling particularly inspired and creative. Yet, on some days the work is urgent, and there is no scope to procrastinate. Then, the challenge is to overcome that and find ways to keep the creative juices flowing. The trick to keeping monotony at bay is to expand your canvas. As with every art, so with design. The designer’s first audience must be one’s own self - create primarily for yourself and savor the joy of creating, without having a client or product in mind. 

A sample of Prakash’s work for leading Investronaut clients


5. Innovation is the favourite word of the corporate world these days. Do graphic designers need to innovate? Do you for example recall any time you innovated?

Response - Of course! Variety is the spice of life. Graphic Designers are no different. Innovation is the key to evolve and sustain in a competitive world. You have to change your design and approach with the changing pulse and taste of the audience. You have to constantly figure out what will click with the masses. In our current job too we innovate and try to inculcate an element of humour, video content, mailers etc. We have to constantly think of new tricks to engage new audiences in our work and keep the regular ones hooked. 

6. And finally what are the career opportunities and advancement options that are available to a graphic designer?

Rohit & Prakash - There is an evolving hierarchy and career path one can follow as a Graphic Designer. After honing your skills for a year or two as an intern or a Junior Graphic Designer, one can become a Graphic Designer. After 5-10 years of experience one can join as a Senior Graphic Designer. Another 5 years in your kitty and you become a Visual Graphic Designer. All this culminates with one becoming a Creative Head after 18-20 years of experience. 

Life as a Graphic Designer is like a gust of fresh air every day. It forces you to challenge the limits of what you already know, and as a career it is financially rewarding and creatively gratifying. If you are fresh out of school, a graduate, or someone stuck in a stifling profession, looking to unleash your creativity and make a decent living out of it too, this could be your best bet!

The author of this article, Richa Singh is a content writer with Investronaut. She is a voracious reader and a keen traveller.


Wednesday, 14 November 2018 04:58





Keeping up with globalization has led to an enquiry into knowledge domains previously unexplored. The explosion of knowledge has meant that while specializations are in demand, subject parochialism can no longer pass as acceptable. The previous yardsticks for measuring the value of knowledge no longer hold true. This demands a renewed conversation on the practice of knowledge production at this contemporary moment, and a broader conversation on the activities and institutions that shape our understanding of the utility and nature of such knowledge.

Binaries of class, caste and the like have crept into education as well. As with all binaries, one entity invariably takes on a superior position. There are two distinct camps in higher education: Science, Technology, Engineering and Maths (STEM) versus the humanities. While STEM is seen as practical, real and rich in employment potential, liberal arts is viewed as elitist, self-indulgent and vague. This misunderstanding largely stems from treating liberal arts as synonymous with the humanities. To clear the confusion: liberal arts is a fusion of the pure sciences AND the humanities. It defies the straitjacketed distinction between arts and science.


A liberal arts education is not about learning any one kind of content or textbook. It is about learning how to synthesize novel ideas, think critically, develop an aptitude for research, adapt to new situations, make meaningful enquiries, tolerate differences, solve problems, communicate effectively and attain clarity of concepts and thinking. We have falsely come to believe that education is only about collecting degrees and finding employment. Yes, it is that too, but if one hopes to attain success in a professional career, one needs more than that.

The world is a complex place and there are no linear solutions to its problems. Problems like climate change, hunger and terrorism cannot be resolved with parochial, straitjacketed solutions; they need a multidimensional approach.

The dominant thought is that only a degree in engineering or management can secure you a job. This is far from the truth. Media, fashion, education, publishing and commerce are some of the industries that do not involve STEM and yet provide livelihoods. Even the IT industry requires all sorts of non-technical employees to run a company. In the evolving global employment landscape, employees who can work in multi-professional teams and adopt holistic approaches to problem-solving are preferred over those who bring limited skills to the table. A purely scientific or technological education, devoid of any social context, makes a tool out of an employee, not a thinker. Employers are looking for people who can find innovative solutions to problems and approach the issue at hand from different angles.

Indra Nooyi, CEO of PepsiCo and an inspiration to many, said in an interview that apart from hard work, one needs to be well informed and have extraordinary communication skills to climb the ladder in any high-tech industry. An interdisciplinary education is a must to make young graduates wholesome individuals and inculcate these skills in them.


I believe that in contemporary times of great division and bigotry, a liberal arts education is more important than ever. It forces us to admit and understand that a uniform world view is dangerous and boring; that the world is fluid, there are no concrete truths and no one 'right' answer.

A liberal arts education ignites the passion for rational debate and the ability to ask uncomfortable questions and challenge the status quo, and it introduces students to an ever-expanding world of ideas. It leans toward openness instead of containment. It forces us to continually revisit our viewpoints, understand our own position in the world and broaden our ideological borders. Most importantly, it makes us realize that it is okay not to subscribe to uniform notions in any walk of life, and that people and cultures other than ours are as human and as real as our own.

In my view, a liberal arts education enables one to embark on the path of innovation and creativity in whichever career one chooses to pursue. It is those who can think nimbly and responsibly who end up building bright careers.

The author of this article, Richa Singh is a content writer with Investronaut. She is a voracious reader and a keen traveller.



Copyright © 2019 Vishwakarma University