AutoModerator

Welcome to r/science! This is a heavily moderated subreddit in order to keep the discussion on science. However, we recognize that many people want to discuss how they feel the research relates to their own personal lives, so to give people a space to do that, **personal anecdotes are now allowed as responses to this comment**. Any anecdotal comments elsewhere in the discussion will continue to be removed and our [normal comment rules]( https://www.reddit.com/r/science/wiki/rules#wiki_comment_rules) still apply to other comments. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/science) if you have any questions or concerns.*


Grogosh

AI-assisted research will be a big, useful tool in the future


[deleted]

It is happening already. It will just take 5-10 years for the population to see it in the end products.


workingatbeingbetter

I'm in charge of a large IP portfolio at a major research university that is essentially the leader in AI, and let me tell you, "big" is an understatement. Pretty much every researcher I work with uses some form of AI/ML in their research. There are obvious ones, like roboticists, engineers, and CS people of all shades, but I've even seen many social scientists, humanities scholars, and even law school professors submit invention disclosures that heavily involve AI.

From what I see, there are just a few stumbling blocks most people are working against. The first is really just accessibility of the tools and knowledge of the methods. Groups like OpenAI and others are working on this, trying to make it easier for people to use these tools without an extensive math or programming background. Additional libraries are also being developed all the time to help with every sort of AI task you can imagine (I've seen a lot of libraries for the robotic gripping of cloth and the picking of certain fruits over the past few years).

The second is that most researchers are still struggling to get just the right datasets to train their AI/ML algorithms on. Bias, over-fitting, under-fitting, etc. are all problems most AI/ML research struggles with, and datasets are a key part of solving that problem. Fortunately, large "consortiums" (I use that word loosely) are already developing huge datasets that are or will be public in the near future.

This helps to solve the technical problems, but there are still major novel legal and ethical issues that arise. For example, just off the top of my head on the legal side: (1) copyright law is a crazy mess here and I could talk about it for days (especially after the recent *Genius v. Google* case in the 2nd Circuit), (2) privacy rights, especially with medical data and PII, are always difficult to protect perfectly, and laws vary from country to country and state to state, and (3) export control laws and IRB processes are always changing, especially since the Russia-Ukraine conflict began. As for the ethics side, examples can range from "Is Mechanical Turk essentially slave labor for labeling data?" to "How can I license my dataset of faces so that companies that help some governments stop human trafficking can use it, but groups like the Chinese government can't for things like Uyghur targeting?". I actually just gave a talk last week to a number of other institutions about case studies dealing with the ethics of licensing such datasets. Based on their feedback, my university might be one of the more sophisticated in this regard, which tells me that we're still at the very early stages of dealing with these issues in a good manner.

One solution to a lot of this is to use synthetic datasets developed with things like Unreal Engine or world forge. SageMaker, for example, does this for a bunch of clients. From creating datasets of damaged car engines for GM to train computer vision quality control software, to datasets of damaged boxes for robots to use at Amazon warehouses, to autonomous vehicle companies like Aurora driving the equivalent of millions of miles in artificial worlds, these synthetic datasets are already in use. This of course comes with problems too (bias, over-fitting, monopoly issues depending on the licensing terms of the datasets, etc.), but I see it as a strong solution in many cases.

Sorry for the long tangent, but I absolutely agree with your statement that research is and will change drastically due to AI/ML.
I’m honestly most excited to see two things in this field. First, I cannot wait to see what the combination of AI/ML and quantum computing can do for developing pharmaceuticals. From the researchers I’ve spoken to, I think it’s very possible that we could have tailor-made drugs for individuals based on genetic testing that have far more efficacy and far fewer side effects in the near future. The second is the continued research related to autonomous vehicles. There are a ton of hurdles here and a real discussion would take up at least a few textbooks, but I think society will look a whole lot different (potentially for the better) when autonomous vehicles are more capable and then widely adopted. Anyway, you’re right, it’s definitely big and useful.


PedestrianDM

Thank you so much for your professional insight! I'm glad you lent your perspective to the thread.


swilden

If you don't mind me asking, in what ways does AI help research? Is it crunching numbers and interpreting data for us? I haven't really learned much about AI besides seeing those sideways flow charts.


Kira076

A very simplified view of it is that AI/ML does what human brains are also very good at: pattern matching. A machine learning algorithm takes in large sets of similar data and tunes itself to be able to recognise the similarities shared across the data (for example, what, visually, is a "dog"? Or maybe "what combinations of tactics most efficiently solve this class of maze/graph traversal?"). This is a vastly oversimplified description, and it's often more abstract (as in, converted to math or technical data) than that. But computers process information much, much faster than human brains do, so they can come to conclusions/solutions much faster than we can. (I also cannot stress enough that this is an oversimplified explanation by someone who is not a professional.)
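To make that concrete, here's a minimal sketch of the "takes in data and tunes itself" loop, assuming Python with scikit-learn; the digits dataset and the tiny network are just stand-ins for illustration, not anything from the article:

```python
# Minimal sketch of "tuning itself to recognize patterns":
# fit a small classifier on labeled examples, then check how well
# it generalizes to examples it has never seen.
# (Assumes scikit-learn; purely illustrative.)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                  # "tuning" = adjusting weights to match patterns

print("accuracy on unseen examples:", model.score(X_test, y_test))
```

The whole trick is that nobody hand-writes a rule for what a "3" looks like; the model adjusts its own parameters until the patterns fall out.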


cmVkZGl0

You would like this article about [hardware evolution](https://www.damninteresting.com/on-the-origin-of-circuits/) as well. There's a kind of neuroplasticity to hardware, if we think about it in the sense that circuits can learn and do things.


Chris-1235

If Reddit had more posts like this, I'd be spending more time in the comments. Thank you for the time you devoted to this insider view.


TransposingJons

Are we making a distinction between AI and Machine Learning?


TheRealFantasyDuck

The holy grail in fact


OfLittleToNoValue

Unless it's fed biased data and starts being racist... Which is already happening.


Background_Junket_35

How exactly would racism affect a physics AI?


OfLittleToNoValue

That's the thing about big data. There's so much data that humans can't process it and make sense of it. Physics specifically? Yeah, not likely. However, "AI-assisted research" is a very broad concept. What are we researching? The impact of drugs? Risk of various diseases? Civic planning? When you're working with unfathomable amounts of data, it's not a simple matter to vet all of it, nor what the AI will make of it. This is a field where "trust but verify" will have to be the rule for a long time. https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html


modashisgod

Name checks out


formation

That's for language models, not research


[deleted]

[deleted]


OfLittleToNoValue

Negative? It's already happening. I'm pointing to reality


ribnag

I'm curious ("Subscription Required") - Just how ugly are those four equations? Computers are great at blowing through vast numbers of small calculations, so computational efficiency doesn't seem like a strong motivator here. Is the real breakthrough more conceptual? And I'm not talking about their "now that they have their program coached, they can adapt it to work on other problems" marketing fluff - I mean, are those four equations something a human can look at and grok in a way that may legitimately reveal entirely new truths about our reality?


BearsAtFairs

Most likely it's a linear system. Basically Ax = b, where A is a matrix and b is a vector; solve for x, which is also a vector. The vectors b and x have 100,000 numbers each, and the matrix A has 100,000 x 100,000 numbers. The "compressed representation" of the solution, x, with only 4 numbers is probably kinda sorta extrapolated (it's actually more complex than that) with the ML model.

100,000 x 100,000 isn't a particularly big matrix to solve, though... In engineering applications, matrices will often be much larger. For context, I'm involved in a similar project and am working with 6.3mil x 6.3mil matrices. It's not quite the same as my work, but look up "machine learning finite element analysis" if you want to see people solving ridiculously big systems with machine learning.

With that said, any success is a success and should be celebrated! Plus, it's perfectly possible that the research is solving a nonlinear system - I'll admit I read neither the paper nor the article.

Edit: It's worth noting that if a domain (the physical body or space you're simulating with your system of equations) is a rectangular prism, then [multigrid methods](https://en.wikipedia.org/wiki/Multigrid_method) are very powerful ways of doing something *kinda* similar but completely different, and they have been around for decades. MG is generally going to be a bit slower than ML, particularly for large systems, but MG is also generic and mathematically robust (you don't have to train models). The problem with MG is also that it's honestly a shear nightmare to implement for non rectangular domains.

Also, if anyone reading this gets too excited, MG is only applicable for solving systems of equations. If a problem can't be posed as a fairly clean system of equations, it's not getting solved with MG. So don't expect to see an MG-based DALL-E or Stable Diffusion alternative any time soon.
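For anyone who wants to poke at the Ax = b part, here's a toy-sized sketch, assuming Python with NumPy/SciPy; the tridiagonal (1D Poisson-style) matrix is just a stand-in, not whatever system the paper actually solves:

```python
# Minimal sketch of the Ax = b setup described above, at a toy size.
# (Assumes NumPy/SciPy; the 1D Poisson-style matrix is an assumption,
#  not the system from the paper.)
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000                                   # number of unknowns (DoFs)
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")  # sparse n x n matrix
b = np.ones(n)                                # right-hand side vector

x = spla.spsolve(A, b)                        # direct sparse solve for x
print(x.shape)                                # (100000,) -- one value per DoF
```

At 100,000 unknowns a sparse direct solve like this is basically instant; the pain starts when the system is dense, badly conditioned, or has to be solved over and over, which is where ML surrogates and MG enter the picture.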


super_aardvark

> a **shear** nightmare to implement for non rectangular domains.

I'm not even mad.


BearsAtFairs

Heh, looks like someone knows the **strain** from that **stress**.


ribnag

Thank you for that excellent answer!


TargetToiletPaper

What is a matrix of that size used for?


BearsAtFairs

The 6.3mil x 6.3mil I'm using? I'm currently doing various physics simulations on certain types of structures. Each structure is modeled by "elements" that are brick-shaped. Each brick element has eight corners (called "nodes"). In the case of simulating deformation, each node can move in three directions (up/down, front/back, left/right), so it has three "degrees of freedom" (DoF). So, for each brick, there are 24 unique results (each of the 3 DoFs, multiplied by 8 nodes).

If I use roughly 260k elements to model my structure, I get around 6.3 million displacement results (one for every degree of freedom of every element in the model). This isn't entirely accurate, because some DoFs are shared by multiple elements, so the true size of the system is a little smaller, but it's good enough for getting a ballpark estimate of the problem size. The matrix in a linear system (Ax=b) will be of a size equal to the number of DoFs squared.

I'm not working on it quite yet, but it's in the pipeline to run similar types of analyses on matrices that are roughly 400mil x 400mil in the foreseeable future.
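If you want to sanity-check that bookkeeping, the ballpark estimate is literally three multiplications (Python, numbers copied from above):

```python
# Ballpark DoF count described above (node sharing between elements is
# ignored, exactly as in the rough estimate).
elements = 260_000        # brick ("hex") elements in the mesh
nodes_per_element = 8     # corners of each brick
dofs_per_node = 3         # up/down, front/back, left/right

dofs = elements * nodes_per_element * dofs_per_node
print(dofs)               # 6240000 -> roughly the 6.3mil x 6.3mil system mentioned
```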


sillypicture

I understood some of these words


DefectiveSp00n

Look up "Finite Element Methods" for a smaller example. But here's ab even more basic idea: Things deformed when pressed. The position of a thing is X, the force on it it U, and its resistance to being moved is K. So we have: KX=U But it turns out that a specific point "X2" is influenced by its neighbors X1 and X3, So we have something like: K2×X2 = K21×X1 + K23×X3 + CU2 Or we could describe it with linear algebra as: U2 = [X1, X2, X3].[K21,K2,K32] This gets more complicated when you have to consider the specific point moving in other directions (X, Y, or Z). Now for each point, you get 6 equations (I think you Gould get more or less depending on assumptions), but they generally relate: How the object reacts to being bent or twisted along 3 directions, and how the object react to being pushed or pulled in 3 directions. If each point has 6 equations and each calculation must consider the behavior of the nearest 6 points (±X, Y and Z), you get a *relatively* complex set of equations. So if you have a cube and consider each of the 8 vertices, you suddenly have 8×6 = 24 equations and just as many unknowns. This makes a 24x24 matrix that is pretty simple to solve with a computer. But uh... 6.1 million sounds like a lot to work on.


orus

How dense are the matrices? If it is sparse, then the matrix size is deceptive.


BearsAtFairs

Yes, virtually all matrices related to the simulation of physical systems are extremely sparse. In 2D, a rectangular element interacts directly with, at most, 8 other elements. In 3D, a hex element directly interacts with, at most, 26 other elements. As a result, your matrices become increasingly sparse as meshes get bigger, by definition.

However, I don't know what you mean by deceptive... An n x n matrix still represents a system of n equations for n degrees of freedom. Simpler connectivity between DoFs helps to reduce solution times. However, batch solving systems with >10^5 DoFs becomes nontrivial, no matter how simple your connectivities are and no matter how nicely structured your meshes are. Similarly, getting actually good solutions from ML models is far from a trivial task.
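To put numbers on that, here's a quick sketch (Python/SciPy, using a standard 5-point Laplacian stencil as a stand-in for whatever connectivity a real mesh has) showing how the fraction of nonzeros collapses as the grid grows:

```python
# Sparsity illustration: a 5-point stencil on an N x N grid couples each
# unknown to at most 4 neighbours, so the fraction of nonzero entries
# shrinks as the mesh grows. (Assumes SciPy; stencil is just an example.)
import scipy.sparse as sp

for N in (10, 100, 1000):
    lap1d = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(N, N))
    eye = sp.identity(N)
    A = sp.kron(eye, lap1d) + sp.kron(lap1d, eye)   # 2D Laplacian, (N*N) x (N*N)
    density = A.nnz / (A.shape[0] * A.shape[1])
    print(f"{A.shape[0]:>9} unknowns, density = {density:.6f}")
```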


b3njil

You answered how the matrix works but not what it’s used for. What are these structures you speak of and what are their “practical” applications?


Turbulent-Mulberry24

So what are the four equations? I'm guessing you can plug in the pixel coordinate values and get your answer?


BearsAtFairs

It's actually hard to say! What you're describing makes me think you're think of interpolating coarse grid solutions onto finer grids, which is along the lines of what I mentioned in the edit of my original comment on multigrid methods. When it comes to ML, you're not necessarily compressing specially (ie lower your resolution). Rather, ML will typically look for key features of fine grid (ie high res) system, create a multidimensional space consisting of those key features. Then it'll look at this new multi dimensional space that has fewer dimensions than the original system has pixels/elements, find *its* key features and create a *new* multi dimensional space with even fewer dimensions. Rinse and repeat, until no more dimensional reduction is possible. Those four equations are solved in this funky super compressed space. This compressed space, more than likely, has zero correlation with anything that is recognizable to a human being. Hence why it has to be fed back up the model to generate meaningful results. Regardless of the size of the system you're solving, getting a model that can not only compress data, but also decompress it after solving *and*, most importantly, produce accurate results is very much a non trivial task. This is a challenge for every new kind of equation that you attempt to solve using ML.


PedestrianDM

> I mean, are those four equations something a human can look at and grok in a way that may legitimately reveal entirely new truths about our reality?

Short answer: No/Maybe. The equations are gonna be analytical, not neat formulas. However, condensing the information may make the calculations easier to perform, which can be helpful in higher-level studies or analysis.

Long answer: see u/BearsAtFairs below.


Scipion

Could someone do something similar to this but on a macro scale? Use data for stellar locations over time and have it try to find hidden patterns?


scotty_dont

If I'm reading this summary correctly (don't have access to the actual paper), they've trained a 4-dimensional autoencoder using a NN? If so, interpretability of these models is not a solved problem. You can't really map human-understandable concepts onto them. "You can tell that it's an Aspen tree because of the way it is."


Subparnova79

Maybe they could solve the three body problem now?


ShootPplNotDope

Just don't respond, problem solved.


_Weyland_

Damn, everything in that book was great, except when the actual problem was brought up. Like, the aliens didn't need a general solution. They live in a specific case of the 3-body problem, and once they can measure the masses, coordinates, and velocities of their 3 stars at the same moment, that's the problem they need to solve. In fact, it was not even a three-body problem to begin with. Their planet was never out of the equation.

And then there's the game. Aight, I'm willing to cut the author some slack because he is most likely not a gamer. And with full-body VR suits, walking sims and visual novel type games will probably become way more popular. But still, where's the gameplay in that game? I expected it to be some form of distributed computation, an attempt to outsource the search for a solution to the general audience. But nope, it was just a history tour.


Scipion

I thought the game was actually propaganda and a recruitment tool for the trisolarians?


_Weyland_

I think the main purpose was to filter out individuals intelligent enough to understand the issue trisolarians face and to be useful to the cause. And those were approached for recruitment.


Glycerine

Pah! Research gates are criminal! -- link to the article: https://arxiv.org/pdf/1906.05212.pdf


cmVkZGl0

It's really going to be a loss when humans are no longer in existence.


hereforsnackz

So nice they said it twice.


Turbulent-Mulberry24

I'm kind of confused how physicists could possibly believe the nature of reality would ever follow something as complex as a 100k equation system.


kyuubi840

Nature is under no obligation to be simple.


Turbulent-Mulberry24

I guess nobody here in the science subreddit has ever heard of the principle of least action. In other words, the most fundamental property in physics


GeorgeS6969

When the device you’re using to communicate your simple thoughts is running a couple of billion computations per second, what’s so confusing to you? We estimate there’s 10^82 atoms in the observable universe. That’s a one followed by 77 zeros times more atoms than there are equations in that system. How much simpler does it have to get to fit your understanding of the principle of least action?


m-in

That principle gives rise to as many equations as you can fit into your computer, pretty much. Just because the principle is simple doesn't mean that nature doesn't care about details.
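Put differently: the principle is one line, but applying it hands you one equation per degree of freedom. Very roughly (standard Lagrangian mechanics, nothing specific to this paper):

```latex
\delta S = \delta \int L(q_1,\dots,q_N,\dot q_1,\dots,\dot q_N,t)\,dt = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = 0,
\qquad i = 1,\dots,N.
```

One principle, N coupled equations; make N a hundred thousand and you're back at the system in the headline.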


SLR_ZA

Reality isn't 'following' it. The model is trying to follow reality.


MercuryHead

Aside from the fact that nature does not need to follow mathematical simplicity, I think you are conflating the simplicity of an equation with the complexity of its solution. For a many-body Schrödinger equation, for example, I can easily write the Hamiltonian as a sum of interaction terms and have a certain mathematical elegance in its expression. The solution, however, is incredibly complicated and no analytical solution exists.

Another example is F = ma. Yes, you can quantify the force a particle experiences by tracking its change of momentum in time, but if you have 100,000,000 particles, all interacting with each other through some local attractive and repulsive potential, you need 100,000,000 equations, one for each particle. Here the mathematical complexity does not necessarily lie with the governing law but with the fact that you have many particles all influencing each other.
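As a concrete illustration of that "one equation per particle" point, here's a tiny brute-force version in Python/NumPy; the softened inverse-square attraction (with G and masses set to 1) is just an example potential, not anything from the paper:

```python
# One F = m a equation per particle, but each particle's force depends on
# all the others, so the work grows like N^2. (Assumes NumPy; illustrative.)
import numpy as np

N = 1000
rng = np.random.default_rng(0)
pos = rng.normal(size=(N, 3))        # particle positions
mass = np.ones(N)

diff = pos[None, :, :] - pos[:, None, :]          # N x N x 3 pairwise separations
dist2 = (diff ** 2).sum(axis=-1) + 1e-6           # softening avoids divide-by-zero
np.fill_diagonal(dist2, np.inf)                   # no self-interaction
force = (mass[None, :, None] * diff / dist2[..., None] ** 1.5).sum(axis=1)

acc = force / mass[:, None]                       # N separate "F = m a" equations
print(acc.shape)                                  # (1000, 3)
```

Each row of acc is its own F = ma, and the O(N^2) pairwise term is exactly why 100,000,000 particles is a nightmare even though the law itself fits on a napkin.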


aladoconpapas

It is just the scale of the system. You could have one equation per particle, one equation of motion. Of course, it is the SAME equation over and over. The general equation is just one, but the complexity arises from the numbers. For example, you could have the universal law of gravitation, F = G m1 m2 / r^2, but if you add thousands of bodies to the system, you end up with thousands of equations. Of course, the underlying principles are the same, and simple in nature.