Psyda

Psyda is an AI-Powered, Data-Driven company providing academic and industrial research services and products.

We harness the power of human analytics to find insights from your data and create a compelling story around it.

ijiweiTalk Ep 282: Infringement, "lying"...Why does the "omnipotent" AI fail? 30/07/2023

Our CEO appears in Chinese Think Tank's podcast interview on Generative AI.

Why are creators so wary of generative AI? Why does the seemingly omnipotent AI sometimes reply with crudely wrong answers?

12/07/2023

This week on Minhaaj Podcast Season 2 Episode 3, I had the pleasure of hosting ex-Microsoft and ex-DeepMind research engineer Aleksa Gordić to talk about his journey into the world of ML. From a school-despising young programmer to a brilliant DeepMind researcher with only a bachelor's degree, Aleksa is a scintillating example of tech talent thriving without higher qualifications, much like Chris Olah. He is also a LinkedIn Top Voice for 2023. His YouTube channel, The AI Epiphany, is a great resource for budding ML researchers to follow the latest developments in AI research.

Here are the timestamps of my conversation with him:

00:00 Intro
00:45 Dropping Out, Self Learning & Chris Olah
03:20 From Android Developer to ML Engineer
06:25 LeetCode and CodeForces, Coding vs Soft Skills
17:30 Input and Output Mode of Learning
21:41 Yugoslavian Education, Ćevapčići, Hate for Schooling
25:46 Maths Teaching, Lack of Incentivization and PISA Scores around the World
29:29 Inspirational Teachers
31:50 Microsoft HoloLens Summer Camp & Apple Vision
39:26 Microsoft Research, Google AI, OpenAI & ResNet
41:50 Culture at Microsoft vs Google, Teams & Research Areas
50:00 Proprietary vs Open Source Models, Falcon 40B, MosaicML
01:01:02 Microsoft’s Gameplan, Profits vs User Acquisition
01:10:27 Alan Turing’s Paper, Definition of ‘Machines’ & ‘Think’
01:14:05 Neuromorphic Computing, Neuronal Pathways & Future of Hardware
01:16:18 LLM Benchmark Saturation & Research Directions
01:20:37 Disinformation, Adobe Firefly and Social Fabric
01:23:33 Lawsuits against Stability AI & OpenAI, Transition from Non-profit to For-Profit
01:28:30 Politicization of AI, Supercomputing & Technological Real Politik
01:31:14 EU AI Regulation, European Innovation Stifling & Repercussions
01:38:54 US Restrictive Visa Regime, H1B Tech Visa Problems & Tech Talent Moving out of the US
01:47:00 Life outside Work, Sports & Calisthenics

https://youtu.be/vpAH_IAHJK0

AI powered self serving BI with Ryan Janssen & Paul Blankely - Zenlytic 20/06/2023

Ryan is an entrepreneur, data scientist, engineer, and former VC. He is the co-founder and CEO of Zenlytic, a SaaS business that makes a next-generation AI-...

LLMs and Coke 19/06/2023

Ryan is an entrepreneur, data scientist, engineer, and former VC. He is the co-founder and CEO of Zenlytic, a SaaS business that makes a next-generation BI ...

UMC & TSMC semiconductor rivalry in Taiwan 16/05/2023

https://medium.com//umc-tsmc-semiconductor-rivalry-in-taiwan-fd6f8ac02d64

First mover disadvantage in the semiconductor race.

26/04/2023

This week I intend to write on the humble beginnings of Chinese AI and its ultimate triumph over the vicissitudes of deep learning and regulatory buy-in. Through years of grit, perfectionism, and hard work, China became the undoubted leader in AI, not without its woes.

China's AI development traces back decades to a crucial wellspring: the Silicon Valley lab of NEC (a Japanese multinational information technology and electronics corporation).

Last week, Jeffrey Ding's phenomenal Substack publication ChinAI published a two-part piece on how China's current AI success traces its roots to NEC's state-of-the-art deep learning lab. After Google fully withdrew from the Chinese mainland market, Baidu was flush with success. Robin Li, ever attentive to technology, kept up with the dynamics of top technology teams at home and abroad, soon realized that AI would bring new opportunities to Baidu, and began to search everywhere for talent.

When the two met, Yu Kai held forth on deep learning, and Robin Li talked about his interest in speech recognition and image recognition. There was the surging excitement of two people who regretted not having met earlier. It was also on this day that Yu Kai and Robin Li made a historic mutual choice for Baidu and China's AI landscape: Robin Li invited Yu Kai to return to China. It was only after Yu Kai joined that Baidu took the lead in injecting deep learning into its genes.

Robin Li prepared to establish the world's first AI laboratory named after "deep learning" – the Baidu Institute of Deep Learning (IDL).

Yu Kai was the actual operator of Baidu IDL in the early days. Under his influence, Baidu attracted a large number of outstanding AI scientists, such as Wei Xu, the founder of the PaddlePaddle platform, Chang Huang, the co-founder of Horizon, and Yuanqing Lin, the dean of Baidu Research Institute. Coincidentally, they have all visited the same place in the past: Silicon Valley’s NEC Lab.

Yihong Gong introduced the first group of Chinese AI researchers to NEC, including Wei Xu and Shenghuo Zhu, who interned at NEC while completing doctorates in machine learning at the University of Rochester.

When the Chongqing Research Institute of the Chinese Academy of Sciences was being established in 2011, its dean, Jiahu Yuan, traveled to the U.S. three times to invite Zhou to join, and Zhou decided to return to China to contribute and attract talent for the country. Four years later, he took face recognition technology into practice and founded Cloudwalk, one of China's top computer vision companies.

For a translation of the original Leiphone article published in Chinese, "The past happenings of NEC Lab in Silicon Valley: the one who dragged Chinese companies into the AI era," see the comments.

And they did it without shooting down childish balloons, restricting semiconductors, or questioning social media apps. That's class. Success is the best revenge.

11/04/2023

As AI investment declines and the Peak of Inflated Expectations spirals down toward the Trough of Disillusionment, businesses are abandoning the unfulfilled promises of AI and refocusing on fundamentals and, even more, on the looming geopolitical megathreats.

Last week, a very questionable but exhaustive AI Index Report 2023 was published by the Stanford Institute for Human-Centered Artificial Intelligence (HAI). I would rather follow Nathan Benaich's State of AI Report and the Australian Strategic Policy Institute's Critical Technology Tracker for a relatively better and less politically motivated picture.

Nonetheless here are some emerging themes in the report:

1. Industry races ahead of academia. (Academia actually contributes almost nothing now, as better reports note, contrary to what this report suggests.)
2. The number of incidents concerning the misuse of AI is rapidly rising.
3. Performance saturation on traditional benchmarks. (Actually, there is very little to be gained on the algorithmic side, for the most part.)
4. According to Luccioni et al., 2022, BLOOM’s training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco.
5. Global AI private investment was $91.9 billion in 2022, which represented a 26.7% decrease since 2021.
6. Chinese citizens are among those who feel most positively about AI products and services, and the Chinese government has the highest satisfaction rating in the world according to the IPSOS report. China is an undoubted AI leader, as noted in literally every other report that is unbiased and free of Western propaganda.

However, I personally believe in a broader and more interdependent analysis of the situation, one that includes externalities and macroeconomic variables. Without such dependencies, any report is too suggestive and leaves erudite analysts asking for more, in my humble opinion. This is exactly why Dr. Nouriel Roubini's book Megathreats, despite its dystopian outlook, is such a brilliant exposé of global economic crises and their spillover effects. He is known as Dr. Doom for his prediction of the 2008 financial crisis and his attention to detail.

In chapter 8, "AI Threat," he seems to endorse the idea that things that can be automated must be automated, but insists the idea be ensconced in pragmatism rather than fiction. He summarizes the chapter with these words:

“Earth may be lucky to reach the intelligence explosion of the singularity. Will a deadly pandemic finish us before the transition to machines is complete? Will climate change destroy the planet before rational machines come to the rescue? Will we suffocate under a mountain of debt? Or will the United States and China destroy the world in a military conflict as competition to control the industries of the future becomes extreme? Indeed who controls AI may become the dominant world super-power. This geopolitical rivalry forms the megathreat we turn to next.”

29/03/2023

Amazon has delisted 10 billion products for counterfeiting and has brought lawsuits together with brands to keep illegal goods off its platform. Amazon said it spent more than $700 million last year on its anti-counterfeiting efforts and has 10,000 people working on them. Does OpenAI do the same, or does it actually profit from the opposite, without compensating the original content creators?

Millions of original content creators have complained about OpenAI's plagiarism of their work. Peter Nixey, for example, has answers that have garnered over 1.7M views, putting him in the top 2% of users on Stack Overflow. A popular post on Reddit's r/blender is circulating about the frustrations of a soon-to-be-unemployed 3D artist whose boss now favors Midjourney over real, creative art. The irony of the situation is that the AI that generates and replaces these people's work is trained on THEIR data to begin with.

Elon Musk, Bill Gates, and other tech leaders (including me) call for a pause in the 'out of control' AI race. The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that underpins the viral AI chatbot tool, ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized exams, and building a working website from a hand-drawn sketch.

The letter, which was also signed by the CEO of OpenAI, said the pause should apply to AI systems “more powerful than GPT-4.” It also said independent experts should use the proposed pause to jointly develop and implement a set of shared protocols for AI tools that are safe “beyond a reasonable doubt.”

Earlier, we saw AI-generated pictures of Macron and Trump being dragged through Paris protests, and of the Pope strolling the streets in an oversized white puffer jacket. If that isn't an impersonation crime, what is? Who must be punished: the social media companies proliferating it, the generative art companies, or the average citizens who are told they can be imaginative?

Who are the authors of research papers written with ChatGPT, and regardless of their quality, should they be accepted as publishable work? With the amount of venality and unscrupulousness in academia, do you really think Ph.D. supervisors care about how their students write papers, especially when their petty incentives are attached to it?

Over 200,000 jobs have already been lost in the US and more will be cut, not because of the worsening economy but because ethically decrepit organizations like OpenAI are given a free pass by the powers that be, while innovative companies are asked asinine questions like 'Does your app need Wi-Fi to work?'

22/03/2023

It's that time of the year when NVIDIA holds its reputable GTC conference. GTC 2023 kicks off with the usual glamorous keynote by Jensen Huang. NVIDIA is yet to break out from its resistance point of $285 per share from the previous year. From negative earnings last quarter to barely acceptable earnings in the previous couple of quarters, it is nonetheless far from dire straits.

However, on the technology front, it makes great leaps forward. Some of the sessions worth watching are:

- Foundations for Deploying Trustworthy AI [S51181] with Anand Kumar
- Accelerate Spark With RAPIDS For Cost Savings [S52202] with Raheja
- AI and Business: Don’t Become the New Odd Couple (Presented by Deloitte) [S52383] with Christianson
- AI Business Essentials for Executives [SE52209] with Adnan Boz
- Developing State of the Art Models in a Short Time – Lessons from Kaggle Grandmasters [S51852] with Liu
- How Adept and MosaicML are Advancing AI with Oracle and NVIDIA (Presented by Oracle Cloud) [S52402] with Rao
- Learn How to Create Features from Tabular Data and Accelerate your Data Science Pipeline* [DLIT51195] with Titericz
- Temporal Graph Learning for the Financial World [S51380] with Rajput
- Winning the AI Race Without Losing Control (Presented by Dataiku) [S52387] with Dougherty

In HPC and Industry section:

- AI Disruption Fueling Next Wave of Innovation (Presented by Supermicro) [S52427]
- Applications in AI to Assist Law Practitioners and In-House Legal Departments [PS51080]
- Audio Research at NVIDIA's Applied Deep Learning Research [S51464] by Shih
- Building Conversational AI Applications* [DLIW52076] by David Taubenheim
- Defining the Quantum-Accelerated Supercomputer, with Q&A from EMEA Region [S51075a] by Costa
- Fireside Chat with Ilya Sutskever and Jensen Huang: AI Today and Vision of the Future, with Q&A chat in Korean, 한국어 Q&A [S52092a]

Some phenomenal new SDKs are:

NVIDIA cuOpt, for real-time logistics, introduces advanced, massively parallel algorithms that optimize vehicle routes, warehouse selection and fleet mix. Its dynamic rerouting capabilities can reduce travel time, save fuel costs and minimize idle periods, potentially saving billions for the logistics and supply chain industries.
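cuOpt's own GPU API is not shown here, but the problem class it attacks is easy to sketch. Below is a toy nearest-neighbour route builder in plain Python, purely illustrative of what "optimizing vehicle routes" means; real solvers like cuOpt explore vastly better solutions in parallel, and nothing in this sketch comes from the cuOpt SDK itself.

```python
from math import dist

def nearest_neighbor_route(depot, stops):
    """Greedy route construction: repeatedly drive to the closest
    unvisited stop, then return to the depot. A toy baseline for the
    vehicle-routing problem that dedicated solvers handle properly."""
    route, current = [depot], depot
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the loop back at the depot
    return route

route = nearest_neighbor_route((0, 0), [(5, 1), (1, 1), (3, 4)])
print(route)  # [(0, 0), (1, 1), (3, 4), (5, 1), (0, 0)]
```

Even this greedy baseline shows why dynamic rerouting matters: every time a stop is added or removed, the whole tour can change.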

cuQuantum, for quantum computing, enables large quantum circuits to be simulated dramatically faster, allowing quantum researchers to study a broader space of algorithms and applications.

cuNumeric, for array computing, implements the NumPy application programming interface for automatic scaling to multi-GPU and multi-node systems with zero code changes.
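The "zero code changes" claim is worth spelling out: because cuNumeric mirrors the NumPy API, porting is in principle just the import line. A minimal sketch (the try/except fallback is my addition so the same snippet also runs on a machine without cuNumeric installed):

```python
# cuNumeric exposes the NumPy API, so existing array code ports by
# swapping the import; the fallback keeps the script runnable on CPU.
try:
    import cunumeric as np  # GPU / multi-node execution
except ImportError:
    import numpy as np      # identical API on the CPU

a = np.arange(10, dtype=np.float64)
total = float((a * a).sum())  # same call on either backend
print(total)  # 285.0
```

The rest of the program stays untouched, which is what makes the scaling story attractive.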

Lots to watch and lots to learn. A kind of event that keeps you busy for weeks.

15/03/2023

A week ago, a very well-written NYTimes.com opinion essay by Chomsky et al., titled 'The False Promise of ChatGPT,' took many forums by storm. Unfortunately, it is falling on the deaf ears of regulators.

At the cost of immense environmental damage, unjustifiable costs, and little to no contribution to understanding humans, OpenAI recently released GPT-4, a new scam on top of the previous one. While countries around the world are putting forward legislation to curb the detriments of this disinformation-and-propaganda Pandora's box, corporations are being allowed to damage the environment and authenticity through the rote memory of an undiscerning machine.

Chomsky's debate with Gary Marcus at Web Summit brought forward allegations that should have been taken seriously, and yet GPT-4 now has generative art bolstering its randomly created hodgepodge of opinions. Factual inaccuracies from Bard to the gross indecency of Microsoft's Tay have had no impact on the authorities that should have regulated the unintended consequences.

To add disgust to injury, Microsoft just laid off its entire ethics and society team within its artificial intelligence organization, as part of recent layoffs that affected 10,000 employees across the company.

In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger “responsible innovation toolkit” that the team posted publicly.

More recently, the team had been working to identify risks posed by Microsoft's adoption of OpenAI's technology throughout its suite of products. This pattern seems even more obvious now, after Timnit Gebru's firing from Google.

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.

ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

This is a huge step backward for humanity. A world that just became more unauthentic than it already is.

03/01/2023

While no LLM can be expected to produce human-level text and speech by any stretch of the imagination, what it can do with the corpus it is trained on is nothing short of wonderful. And what better way to test the rubber than to put it on the road?

According to the Foreign Service Institute (FSI) and their language difficulty ranking, German is in Category II (languages similar to English). So, it's not among the easiest languages nor among the hardest languages to learn.

Nouns can be masculine, feminine, or neuter. There are some rules for guessing the gender (based on word endings, for example), but generally you have to learn each word together with its gender.
Verb conjugation is fairly straightforward (with exceptions!). There are fewer tenses than in English, but German also has irregular verbs. The verb sits in second position in a main clause, but in subordinate clauses (introduced by words like 'because' or 'that') it moves to the end.

There are 4 cases (Nominativ, Akkusativ, Dativ, Genitiv), and this is where most people experience difficulties. Basically, the definite/indefinite articles and the adjective endings change based on the gender of the noun and the role it plays in the sentence.

Example: "A kind boy" in Nominativ would be "ein netter Junge", but in Dativ would be "einem netten Jungen".
Oh, and in case you wonder where the '-n' at the end of 'Jungen' came from: this is the so-called N-Deklination, which adds '-n' to the end of some masculine nouns in the Akkusativ, Dativ, and Genitiv cases.
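The case mechanics above are regular enough to capture in a small lookup table. A toy sketch, covering only the masculine indefinite-article paradigm from the example (the tables are my simplification, a tiny slice of the full system):

```python
# Toy declension of "ein netter Junge" (a kind boy) across the four
# German cases, matching the example in the text.
ARTICLE = {"Nominativ": "ein", "Akkusativ": "einen",
           "Dativ": "einem", "Genitiv": "eines"}
ADJ_ENDING = {"Nominativ": "er", "Akkusativ": "en",
              "Dativ": "en", "Genitiv": "en"}
# "Junge" is an N-declension noun: it takes -n outside the Nominativ.
NOUN = {"Nominativ": "Junge", "Akkusativ": "Jungen",
        "Dativ": "Jungen", "Genitiv": "Jungen"}

def decline(case):
    """Assemble article + adjective + noun for the given case."""
    return f"{ARTICLE[case]} nett{ADJ_ENDING[case]} {NOUN[case]}"

print(decline("Nominativ"))  # ein netter Junge
print(decline("Dativ"))      # einem netten Jungen
```

Exactly this kind of interlocking table is what a statistical translation model has to learn implicitly from data rather than from rules.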

Now, what if we used deep learning to actually beat all that complexity into a pretty decent conversation?

A company called DeepL is actually beating the FAANG companies at their own game with the accuracy of its translation models. TechCrunch wrote, "Tech giants Google, Microsoft and Facebook are all applying the lessons of machine learning to translation, but a small company called DeepL has outdone them all and raised the bar for the field."

Since DeepL's initial launch in 2017, they have been developing a completely new generation of neural networks (NN). Using a novel NN design, DeepL networks learn to grasp the subtle meanings of sentences and translate them to a target language in an unprecedented way. This has led to a world-renowned machine translation quality that surpasses all major tech companies.

DeepL not only keeps up with world-leading deep learning companies but continues to set new standards with its advancements in neural network mathematics and methodology. In 2020 and 2021, they launched new models that can more accurately convey the meaning of translated sentences, even overcoming the challenge of industry-specific professional jargon with great success.

Large companies like DB, Fujitsu, Elsevier, and Beiersdorf, among others, are using its models.

The speed at which deep learning is changing the world is beyond imagination.

Photos from Minhaaj Rehman's post 26/12/2022

Our CEO meets the Honorable Ambassador of Pakistan to Rwanda, Amir Mohammad Khan, and the Pakistani business community in Rwanda. Our team reiterated its ambition to help the local community and population with its knowledge and expertise in the fields of AI, technology, and education.

20/12/2022

It's been an eventful and happening year for me, and as the year sets and the new year comes around the corner, I am closing in on a reflective hiatus where I can analyze the lessons from myriad discussions and long brainstorming sessions with some of the most innovative leaders in the Middle East and Africa.

Like human lives, companies and even nations have cycles that repeat themselves and, over time, become obvious to both astute investors and transformative leaders. Our job is to correctly predict and capitalize on these cycles based on their predictors. One such leader is Ray Dalio, who has always inspired me and is among the handful who made out like bandits in the 2008 crisis. I have talked about his previous books with great pleasure, and his new one had long been on my reading list.

A few years ago, Ray Dalio noticed a confluence of political and economic conditions he hadn’t encountered before. They included huge debts and zero or near-zero interest rates that led to massive printing of money in the world’s three major reserve currencies; big political and social conflicts within countries, especially the US, due to the largest wealth, political and values disparities in more than 100 years; and the rising of a world power (China) to challenge the existing world power (US) and the existing world order. The last time that this confluence occurred was between 1930 and 1945.

This realization sent Dalio on a search for the repeating patterns and cause/effect relationships underlying all major changes in wealth and power over the last 500 years. In this remarkable and timely addition to his Principles series, Dalio brings readers along for his study of the major empires - including the Dutch, the British, and the American - putting into perspective the 'Big Cycle' that has driven the successes and failures of all the world’s major countries throughout history.

The Big Cycle includes the long-term debt cycle, which lasts about 100 years, and the short-term debt cycle, which lasts around eight years.

No system of government, no economic system, no currency, and no empire lasts forever. Yet everyone is surprised when they fail.
Dealing with the future comes from understanding how things change. You can understand how things change by studying how they changed in the past.

The Long-Term Debt and Capital Markets Cycle:

* Debt has never been so cheap nor so abundant.
* The Internal Order and Disorder Cycle: Wealth, values, and political inequalities are larger than at any time since the 1950s.
* The External Order and Disorder Cycle: China is becoming as strong as, if not stronger than, the US.

Empires’ power and wealth can be measured with the help of eight determinants:

* Education
* Competitiveness
* Innovation and technology
* Economic output
* Share of world trade
* Military strength
* Financial center strength
* Reserve currency status

Multimodal deep learning models for early detection of Alzheimer’s disease stage - Scientific Reports 06/07/2022

Most current Alzheimer's disease (AD) and mild cognitive impairment (MCI) studies use a single data modality to make predictions, such as AD stages. The fusion of multiple data modalities can provide a holistic view of AD staging analysis. Thus, we use deep learning (DL) to integrally analyze ima...

20/06/2022

Even though undergraduates remain my favorite audience, there are times when their teachers can prove to be an equally good crowd. On June 22nd, 2022, I am invited to deliver a keynote titled 'Neuro-Symbolic AI - Building Sentient Machines' at the Balochistan University of Information Technology, Engineering and Management Sciences (BUITEMS).

The talks are part of a weeklong workshop for the undergrad faculty of Universities in the Balochistan region of Pakistan. The event is sponsored by the British Council and co-sponsored by the National Center for Robotics and Automation (NCRA), Pakistan. The talks will be attended by more than 70 Academic Professionals (mostly PhD’s) in the field of Computer Science and Electrical Engineering. I will also be visiting Control Automotive & Robotics Laboratory to see state-of-the-art research being done to find innovative solutions to national problems.

The National Centre of Robotics and Automation is a consortium of 11 labs across 13 universities of Pakistan, with its headquarters at the National University of Sciences and Technology (NUST) College of E&ME.
The keynote will be presided over by the honorable Vice-Chancellor Ahmad Farooq Bazai, Sitara-e-Imtiaz, with an introduction by Dr. Anayat Ullah Baloch, Director of the Office of Research, Innovation and Commercialization (ORIC) at BUITEMS.

The abstract of my talk is given below:

AI has achieved remarkable feats in its brief history despite profound limitations of scalability and explainability. The current state of research is one of stagnation, given the gigantic compute costs of LLMs (Large Language Models) and the lack of reproducibility by the scientific community. Incremental progress towards AGI is limited, overhyped, and expensive.

It has become obvious that to build a bio-inspired sentient machine, human senses must be reproduced and multimodal learning must be understood in its entirety. This has led researchers to embark on a 3rd generation of AI that pursues this lofty goal. The Cognitive Computing Lab at Intel Labs and the MIT-IBM Watson AI Lab are working on such daunting problems as we speak. This talk will explore some of the developments from these labs, as well as some other avenues of research in the same direction.

Some of the finest researchers in the AI field are being flown in for avant-garde sessions in Robotics, Reinforcement Learning, CAD Design, Robot Development, and AI in general. Among others, the event will have distinguished talks by Dr Hassan Jaleel from Lahore University of Management Sciences, Amazon Scholar Dr Franco Raimondi, Abdul Basit Memon, Muhammad Haroon Yousaf, Anayat Ullah Baloch, and more.

The complete workshop program can be found here: https://www.buitms.edu.pk/News/WorkShop_Schedule.pdf

The press release on speakers can be found here: http://www.buitms.edu.pk/News/PressRelease.pdf

Experimental demonstration of highly reliable dynamic memristor for artificial neuron and neuromorphic computing - Nature Communications 14/06/2022

Neuromorphic computing, a computing paradigm inspired by the human brain, enables energy-efficient and fast artificial neural networks. To process information, neuromorphic computing directly mimics the operation of biological neurons in a human brain. To effectively imitate biological neurons with electrical devices, memristor-based artificial neurons attract attention because of their simple structure, energy efficiency, and excellent scalability.

However, memristors' reliability issues have been one of the main obstacles to the development of memristor-based artificial neurons and neuromorphic computing.

In a paper published in Nature Communications a couple of weeks ago, researchers from the Korea Advanced Institute of Science and Technology (KAIST) propose a memristor 1R cross-bar array, without transistor devices for individual memristor access, with low variation, 100% yield, large dynamic range, and fast speed for artificial neurons and neuromorphic computing.

Based on the developed memristor, they experimentally demonstrate a memristor-based neuron with the leaky-integrate-and-fire property and excellent reliability. Furthermore, they developed a neuro-memristive computing system, based on the short-term memory effect of the developed memristor, for efficient processing of sequential data. Their neuro-memristive computing system successfully trains on and generates biomedical sequential data (antimicrobial peptides) while using a small number of training parameters.
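The leaky-integrate-and-fire behaviour that the memristor neuron reproduces in hardware can be sketched in a few lines of software. A minimal discrete-time simulation (the leak factor and threshold are illustrative choices, not parameters from the KAIST paper):

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Minimal leaky-integrate-and-fire model: the membrane potential
    decays (leaks) each step, integrates the incoming current, and
    emits a spike -- then resets -- once it crosses the threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current  # leak, then integrate
        if v >= threshold:
            spikes.append(1)    # fire
            v = 0.0             # reset
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still fires periodically, because
# charge accumulates faster than it leaks away.
print(lif_neuron([0.4] * 6))  # [0, 0, 1, 0, 0, 1]
```

The hardware version realizes the same dynamics physically: the memristor's state plays the role of the leaky membrane potential.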

The researchers say, 'Our results open up the possibility of memristor-based artificial neurons and neuromorphic computing systems, which are essential for energy-efficient edge computing devices.'

Since the idea of the first memristor by Leon Chua in 1971, the identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated.

If we can only find a way out of this, computing will become exponentially energy efficient, faster, and more accurate. Fingers crossed.

Designing energy-efficient, uniform, and reliable memristive devices for neuromorphic computing remains a challenge. By leveraging the self-rectifying behavior of a gradual oxygen concentration in titanium dioxide, Choi et al. develop a transistor-free 1R cross-bar array with good uniformity and high y...

04/05/2022

Neuroscience is the root of today's artificial intelligence. Reading about and staying aware of the evolution of and new insights in neuroscience will not only make you a better 'artificial intelligence' person, but also a finer creator of neural network architectures. One challenge on the road to AGI is to fully understand the brain's roughly 100 billion neurons and how they connect with each other, a wiring map known as the connectome.

The brain connectome is a neural connection map of the brain, a kind of wiring diagram, which helps explain how the nervous system is constructed and communicates. Getting a good map is very hard, and a usual setup is a combination of diffusion-weighted magnetic resonance imaging (DW-MRI) and tractography. Tractography computes the integral curves over the DW-MRI vector field to find the most likely fiber tract for each voxel. However, this technique is highly sensitive to noise, which is impossible to avoid in DW-MRI measurements.
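"Computing the integral curves over a vector field" is just numerical streamline tracing. A toy 2D sketch using forward Euler steps (real tractography works on 3D DW-MRI data with interpolation and noise handling; the synthetic field below is invented for illustration):

```python
def trace_streamline(field, seed, step=0.1, n_steps=50):
    """Follow the integral curve of a vector field from a seed point
    with forward Euler steps -- the core idea of deterministic
    tractography, minus the 3D data and noise handling of real
    pipelines."""
    x, y = seed
    path = [(x, y)]
    for _ in range(n_steps):
        vx, vy = field(x, y)
        x, y = x + step * vx, y + step * vy
        path.append((x, y))
    return path

# Synthetic "fiber" field: vectors point along +x and bend upward.
path = trace_streamline(lambda x, y: (1.0, 0.2 * x), seed=(0.0, 0.0))
print(path[-1])
```

Noise sensitivity is easy to see here too: perturbing the field slightly at each step makes the traced curve drift away from the true fiber.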

To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. At the Scientific Computing and Imaging Institute at the University of Utah, they are doing trailblazing work to push forward our understanding of the holistic picture of the brain. 'Deep Learning the Shape of Brain Connectome' is an attempt toward this end.

In this paper, the authors represent the brain connectome as a Riemannian manifold, which allows them to model neural connections as geodesics. They show, for the first time, how one can leverage deep neural networks to estimate a Riemannian metric of the brain that can accommodate fiber crossings and is a natural modeling tool to infer the shape of the brain from DW-MRI. Their method achieves excellent performance in geodesic-white-matter-pathway alignment and tackles a long-standing issue in previous methods: the inability to recover crossing fibers with high fidelity.
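For reference, the geodesics mentioned above are the curves x(t) that satisfy the standard geodesic equation of Riemannian geometry (textbook material, not reproduced from the paper):

```latex
% A curve x(t) is a geodesic of the metric g_{ij} when
\ddot{x}^{k} + \Gamma^{k}_{ij}\,\dot{x}^{i}\,\dot{x}^{j} = 0,
\qquad
\Gamma^{k}_{ij} = \tfrac{1}{2}\, g^{kl}
\left(\partial_{i} g_{jl} + \partial_{j} g_{il} - \partial_{l} g_{ij}\right)
```

Estimating the metric g from DW-MRI with a deep network is what lets the method treat white-matter pathways as shortest paths under that learned geometry.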

With the ability to robustly and efficiently model the white matter of the brain as a Riemannian manifold, one can directly apply geometrical statistical techniques such as statistical atlas construction, principal geodesic analysis, and longitudinal regression to precisely study the variability and differences in white matter architecture.

A huge shoutout to Tushar Gupta for bringing it to my attention. https://arxiv.org/pdf/2203.06122.pdf

Videos (show all)

Our CEO's upcoming fireside chat at DataQG.
Bioinformatics with Chanin Nantasenamat aka Data Professor
Neuroscience of Memory with Boris Konrad
Infonomics with Doug Laney
Psychometrics with Dr Okan Bulut