Actor Dulquer Salmaan, who worked under Mani Ratnam’s direction in ‘O Kadhal Kanmani’, says that for any actor, working with the acclaimed filmmaker is like attending the world’s top university. Dulquer spoke about it on Saavn’s entertainment podcast “Take 2 with Anupama and Rajeev”. The actor has been around Mani Ratnam’s sets since he was a child: the son of Malayalam superstar Mammootty used to visit the sets of ‘Thalapathi’.
How was it to have Mani Ratnam direct your father and to work with that same director as a leading man?
“It was amazing. My dad and Mani Ratnam sir have met several times even after ‘Thalapathi’ and discussed several films. They became very close while doing ‘Iruvar’ together. I have just seen Mani sir around my house a lot and his office is literally very close to my house in Chennai,” Dulquer said. When Dulquer finally sat down to work with the director himself, he was “super intimidated” by him. “With Mani sir, you kind of have to have things to talk about or get really silent. He doesn’t talk at all, so there was a moment in between the shots and I was sitting next to him and I am like ‘Say something, anything, come up with something clever’, and there is deafening silence.
“I was sure he was running through the scenes in his head. Getting Mani sir’s film for an actor is like getting into Harvard or something… Getting cast or getting a call is like some kind of accomplishment. Somewhere your work is being noticed or you have been doing something,” added the young actor, who is working across different languages. In the recent past, he has worked in movies like ‘Solo’, ‘Mahanati’ and ‘Karwaan’.
Does he find a big cultural difference in being on a Hindi film set to a set in the south?
“Honestly speaking, I connect or relate more to a Hindi film set because all my Assistant Directors and pretty much everyone I work with would have kind of grown up like me.
“They would have developed in big cities, they are all fairly well-travelled, we probably read the same books and watch the same movies. But the smaller industry not so much; they at the most have exposure to maybe Bengaluru or Mumbai… maybe not the rest of the world, so (it’s the) little things like that.
“That’s the big difference that I find.”
World’s largest brain-like supercomputer switched on for first time
The world’s largest supercomputer designed to work in the same way as the human brain has been switched on for the first time. The newly completed million-processor-core Spiking Neural Network Architecture (SpiNNaker) machine is capable of completing more than 200 million million actions per second, with each of its chips containing 100 million transistors. Reaching this point has taken £15 million in funding, 20 years in conception and over 10 years in construction, with the initial build starting back in 2006, according to a statement.
The SpiNNaker machine, designed and built at the University of Manchester in the UK, can model more biological neurons in real time than any other machine on the planet. Biological neurons are basic brain cells present in the nervous system that communicate primarily by emitting ‘spikes’ of pure electro-chemical energy.
Neuromorphic computing uses large scale computer systems containing electronic circuits to mimic these spikes in a machine. SpiNNaker is unique because, unlike traditional computers, it does not communicate by sending large amounts of information from point A to B via a standard network.
Instead it mimics the massively parallel communication architecture of the brain, sending billions of small amounts of information simultaneously to thousands of different destinations. “SpiNNaker completely re-thinks the way conventional computers work. We’ve essentially created a machine that works more like a brain than a traditional computer, which is extremely exciting,” said Steve Furber, who conceived the initial idea for such a computer. “The ultimate objective for the project has always been a million cores in a single computer for real time brain modelling applications, and we have now achieved it, which is fantastic,” said Furber.
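The spiking behaviour these machines emulate can be illustrated with a minimal sketch. The code below is a hypothetical leaky integrate-and-fire neuron, a standard textbook model, and is not taken from SpiNNaker’s actual software; the parameter values are illustrative assumptions.

```python
# Minimal sketch of a leaky integrate-and-fire neuron, the kind of
# spiking model neuromorphic hardware like SpiNNaker runs in parallel.
# (Illustrative only; not the SpiNNaker codebase.)

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate input current each time step; emit a spike (1) when the
    membrane potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)                     # spike event sent onward
            potential = 0.0                      # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.3 makes the neuron spike periodically.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

In hardware like SpiNNaker, millions of such units run concurrently, and only the small spike events, not the full state, are routed between them, which is what makes the brain-like parallel communication efficient.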
Researchers eventually aim to model up to a billion biological neurons in real time and are now a step closer. To give an idea of scale, a mouse brain consists of around 100 million neurons and the human brain is 1,000 times bigger than that. One billion neurons is one per cent of the scale of the human brain, which consists of just under 100 billion brain cells, or neurons, which are all highly interconnected via approximately one quadrillion synapses. One of the fundamental uses for the supercomputer is to help neuroscientists better understand how our own brain works. It does this by running extremely large scale real-time simulations which simply aren’t possible on other machines.
For example, SpiNNaker has been used to simulate high-level real-time processing in a range of isolated brain networks. This includes an 80,000 neuron model of a segment of the cortex, the outer layer of the brain that receives and processes information from the senses. It also has simulated a region of the brain called the Basal Ganglia – an area affected in Parkinson’s disease, meaning it has massive potential for neurological breakthroughs in science such as pharmaceutical testing. The power of SpiNNaker has even recently been harnessed to control a robot, the SpOmnibot. This robot uses the SpiNNaker system to interpret real-time visual information and navigate towards certain objects while ignoring others.
“Neuroscientists can now use SpiNNaker to help unlock some of the secrets of how the human brain works by running unprecedentedly large scale simulations,” Furber said. “It also works as a real-time neural simulator that allows roboticists to design large scale neural networks into mobile robots so they can walk, talk and move with flexibility and low power,” he said.
AI tools may fail during key medical diagnosis: Researchers
In a first such warning about the role of Artificial Intelligence in making sense of critical health data, a team of US researchers has said that AI used in medicine must be carefully tested for performance across a wide range of populations, as deep learning models may fall short.
The findings should give pause to those considering rapid deployment of AI platforms without rigorously assessing their performance in real-world clinical settings reflective of where they are being deployed, observed the team from the Icahn School of Medicine at Mount Sinai.
AI tools trained to detect pneumonia on chest X-rays suffered significant decreases in performance when tested on data from outside health systems, according to the study published in a special issue of PLOS Medicine on machine learning and health care.
These findings suggest that the deep learning models may not perform as accurately as expected.
“Deep learning models trained to perform medical diagnosis can generalise well, but this cannot be taken for granted since patient populations and imaging techniques differ significantly across institutions,” said senior author Eric Oermann, MD, instructor in neurosurgery at the Icahn School of Medicine at Mount Sinai.
To reach this conclusion, the researchers assessed how AI models identified pneumonia in 158,000 chest X-rays across three medical institutions — the National Institutes of Health, The Mount Sinai Hospital and Indiana University Hospital.
In three out of five comparisons, the performance of the convolutional neural networks (CNNs) in diagnosing diseases on X-rays from hospitals outside their own network was significantly lower than on X-rays from the original health system.
However, the CNNs were able to detect the hospital system where an X-ray was acquired with a high degree of accuracy, and cheated at their predictive task based on the prevalence of pneumonia at the training institution.
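The confound the researchers describe can be sketched in a few lines. The toy “model” below is a hypothetical illustration, not the study’s code: it keys on which hospital an image came from rather than on pathology, so it looks accurate in-house but collapses on an external test set with the same disease mix.

```python
# Hypothetical illustration of a site-confounded classifier (not the
# study's code): the shortcut "predict pneumonia if the scan is from
# high-prevalence hospital A" works internally but fails externally.

def confounded_predict(record):
    # Predicts purely from the acquisition site, ignoring the image.
    return record["site"] == "A"

def accuracy(records):
    correct = sum(confounded_predict(r) == r["pneumonia"] for r in records)
    return correct / len(records)

# Internal test set: hospital A, 90% pneumonia prevalence.
internal = [{"site": "A", "pneumonia": True}] * 9 \
         + [{"site": "A", "pneumonia": False}]
# External hospital B: identical disease mix, different site marker.
external = [{"site": "B", "pneumonia": True}] * 9 \
         + [{"site": "B", "pneumonia": False}]

print(accuracy(internal))  # → 0.9  (shortcut matches local prevalence)
print(accuracy(external))  # → 0.1  (shortcut predicts "no pneumonia" for all)
```

This is why the authors argue that testing only on data from the training institution can dramatically overstate how well a diagnostic model will generalise.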
“If AI systems are to be used for medical diagnosis, they must be tailored to carefully consider clinical questions, tested for a variety of real-world scenarios, and carefully assessed to determine how they impact accurate diagnosis,” explained the study’s first author, John Zech.
Lava Z81 with ‘Studio Mode’ launched in India
Lava Z81 offers different studio lighting effects like stage light and contour light for portraits.
Lava International has launched a new smartphone in India, the Lava Z81, which features a “Studio Mode” that uses Artificial Intelligence (AI) for better pictures.
“I am sure that our consumers will enjoy the next level of smartphone photography. Z81 is a true testimony to our vision of making the valuable technologies accessible,” said Sunil Raina, President, Lava International.
Lava Z81 comes in two variants with 2GB and 3GB of RAM. The 3GB variant is priced at Rs 9,499 while the 2GB variant will be launched soon, the company said in a statement.
Lava Z81 specifications, features
The smartphone’s highlight, ‘Studio Mode’, lets users edit their portraits with different lighting effects. Lava Z81 sports a 13-megapixel camera at both the front and rear. The smartphone features a 5.7-inch HD+ display with an aspect ratio of 18:9 and Gorilla Glass 3 protection on top.
On the software front, Lava Z81 runs Android 8.1 with Star OS 5.0 layered on top. The smartphone comes with 32GB of in-built storage. Under the hood, Lava Z81 is powered by a Helio A22 quad-core processor clocked at 2.0 GHz. It houses a 3,000mAh battery. The smartphone offers face unlock in addition to a rear fingerprint sensor.