February 28, 2021

My struggle life,


Embracing the Struggle in Life
And What I've Learned from the Pain

"We all make mistakes, have struggles, and even regret things in our past. But you are not your mistakes, you are not your struggles, and you are here NOW with the power to shape your day and your future." ― Steve Maraboli

For many of us, life is a struggle. Unless you are lucky, you will have periods in your life where things do not go your way, and it will be difficult to get through them. We all struggle, but for some, that struggle is extremely hard to manage.

I've struggled a lot over the past couple of years. When I look back now, I see I was stuck. I was stuck in a job I hated and in a life that made me miserable. When you are stuck, you aren't moving. I couldn't go back because I was already as far back as I could go. I was at the bottom, both emotionally and mentally.

I struggled daily. With my thoughts, my mental health, myself.

Life sucked.

It sucked because I wasn't going anywhere or moving towards anything. I stayed in this bubble of self-hatred, negativity, and self-doubt while the world around me continued to move. And I learned to remain comfortable in my misery because it was all I knew.

Life will suck for you sometimes too.

It's kind of like some of my runs. Some runs just suck.

I'm an avid runner. Sometimes I can run for hours and feel like I'm on top of the world. Other times, a 30-minute run feels like torture. I had a recent short run where I struggled. It could be due to all kinds of factors. Maybe I didn't get enough sleep. Or it could be because of that glazed doughnut I ate, which is something I rarely do before a run. It could be because my body needs to recover more.

But more than likely, it was because of the cold. I live in the desert, and even though it doesn't get frigid, my body is used to the heat. I have yet to fully adapt to the colder temperatures, and I will struggle until I get used to them again.

A new season, a new struggle.

This is like life. Our lives have different seasons, and some of those seasons are better than others. As the seasons change, so do our lives from time to time. Just as I have to adapt to the cold for my running, we all have to adapt to the changes we experience, to the struggles we go through.

And because I have yet to adapt to the cold, my recent runs have been a little slower than usual, and tomorrow's run may be even slower.

But even when I struggle through those runs, I'm moving forward. Just because I may move a little slower doesn't mean I'm not progressing. I will still accomplish my goal of finishing the run, albeit a little later than I intended. But I'm moving.

What I've learned about struggling, in life and in my runs, is that tomorrow is always another day for redemption. I can come back the next day and have a better run. I can have a better day. And I don't have to let my previous struggles define who I am or where I'm going.

Moving forward means doing something every day to improve my life. It means that even when I'm struggling, I don't let those struggles stop me. I don't let them keep me from writing or running. I don't allow my struggles to keep me from having a positive attitude.

Let's be honest: your life will involve difficulty. Some days will be better than others, and there is really no way around it. There is pain in life.

However, I've learned to embrace the pain that comes along with struggling, because that pain is a wake-up call. Pain can be good for us because it tells us something needs to change. It tells us something is wrong and we need to stop and evaluate where the pain is coming from. Once we recognize it, we can fix it.

My struggles have made me better. That pain has taught me what is important in my life. It has taught me that while life can be tough, it also gets better. We will always encounter problems, but what we do to deal with those problems is what matters most.

Just as my previous run doesn't have to define my next one, you don't have to let your previous struggles define your life.

So I will keep struggling forward instead of remaining stuck. I hope you join me.



February 27, 2021

PERIODIC

 

Recreation Of The Periodic Table With An Unsupervised Machine Learning Algorithm

Computational workflow

The workflow of the PTG begins by specifying a set of point clouds, called ‘nodes’ hereafter, in a low-dimensional latent space to which chemical elements with observed physicochemical features are assigned. The nodes can take any positional structure, such as equally spaced grid points on a rectangle for an ordinary table, a spiral, a cuboid, a cylinder, a cone, and so on. A Gaussian process (GP) model17 is used to map the pre-defined nodes to the higher-dimensional feature space in which the element data are distributed. A trained GP defines a manifold in the feature space that is fitted with respect to the observed element data. The smoothness of the manifold is governed by a specified covariance function, called the kernel function, which associates the similarity of nodes in the latent space with that in the feature space. The estimated GP defines a posterior probability, or responsibility, of each chemical element belonging to each of the nodes. An element is assigned to the node with the highest posterior probability.

As indicated by the failure of some existing methods of statistical dimension reduction, such as PCA, t-SNE, and LLE, the manifold surface of the mapping from chemical elements to their physicochemical properties is highly complex. Therefore, we adopted the GTM-LDLV as the model of the PTG, which is a GTM that can model locally varying smoothness in the manifold. To ensure non-overlapping assignments, such that no multiple elements share the same node, we operated the GTM-LDLV under the constraint of one-to-one matching between nodes and elements. To satisfy this, the number of nodes, \(K\), has to be larger than the number of elements, \(N\). However, direct learning with \(K>N\) suffers from high computational costs and unstable estimation. Specifically, the use of redundant nodes leads to many suboptimal solutions corresponding to undesirable matchings to the chemical elements. To alleviate this problem, the PTG was designed as a three-step procedure (Fig. 1) that relies on a coarse-to-fine strategy. In the first step, we trained the GTM-LDLV with a small set of nodes such that \(K<N\). In the following step, we generated additional nodes such that \(K>N\), and the expanded node set was transferred to the feature space by the interpolative prediction of the trained GTM-LDLV. Finally, the pre-trained model was fine-tuned subject to the one-to-one matching between the \(N\) elements and the \(K\) nodes for tabular construction. The procedure for each step is detailed below.

Figure 1

Workflow of PTG that relies on a three-step coarse-to-fine strategy to reduce the occurrence of undesirable matching between chemical elements and redundant nodes.

Step 1 (GTM-LDLV): the first step of the PTG is the same as the original GTM-LDLV. In the GTM-LDLV, \(K\) nodes, \({{\varvec{u}}}_{1}, \dots , {{\varvec{u}}}_{K}\), arbitrarily arranged in the \(L\)-dimensional latent space are first prepared. Then we build a nonlinear function \({\varvec{f}}({{\varvec{u}}}_{k})\) that maps the pre-defined nodes to the \(D\)-dimensional feature space. The model \({\varvec{f}}({{\varvec{u}}}_{k})\) defines an \(L\)-dimensional manifold in the \(D\)-dimensional feature space, which is fitted with respect to the \(N\) data points of element features. The dimension of the latent space is set to \(L\le 3\) for visualization.

It is assumed that the \(D\)-dimensional feature vector \({{\varvec{x}}}_{n}\) of element \(n\) is generated independently from a mixture of K Gaussian distributions, where the mixing rates are all equal to \(1/K\), and the mean and the covariance matrix of each distribution are \({{\varvec{y}}}_{k}={\varvec{f}}\left({{\varvec{u}}}_{k}\right)\) and \({\beta }^{-1}{\varvec{I}}\), respectively (\({\varvec{I}}\) denotes the identity matrix). According to the GTM-LDLV, the mean \({\varvec{f}}({{\varvec{u}}}_{k})\) is modelled to be the product of two functions, a \(D\)-dimensional vector-valued function \({\varvec{h}}({{\varvec{u}}}_{k})\) and a positive scalar function \(g({{\varvec{u}}}_{k})\). Here, we introduce a vector of \(K\) latent variables, \({{\varvec{z}}}_{n}={({z}_{1n},\dots ,{z}_{Kn})}^{^{\prime}}\), that indicates the assignment of element \(n\) to one of the given \(K\) nodes. The \(k\)th entry \({z}_{kn}\) takes the value of 1 if \({{\varvec{x}}}_{n}\) is generated by the \(k\)th component distribution, and 0 otherwise. Here, let \({\varvec{X}}\) denote a matrix of \({{\varvec{x}}}_{1}, \dots , {{\varvec{x}}}_{N}\) of the elements, and \({\varvec{Z}}\) be a matrix of \({{\varvec{z}}}_{1}, \dots , {{\varvec{z}}}_{N}\). Then, their joint distribution is given by

$$\begin{array}{c}p\left({\varvec{X}},{\varvec{Z}}|{\varvec{g}},{\varvec{H}},\beta \right)={K}^{-N}\prod_{n=1}^{N}\prod_{k=1}^{K}{N\left({{\varvec{x}}}_{n}|{{\varvec{y}}}_{k},{\beta }^{-1}{\varvec{I}} \right)}^{{z}_{kn}},\end{array}$$

(1)

$$\begin{array}{c}{{\varvec{y}}}_{k}=f\left({{\varvec{u}}}_{k}\right)= g\left({{\varvec{u}}}_{k}\right)h\left({{\varvec{u}}}_{k}\right),\end{array}$$

(2)

where \(N\left(\cdot |{\varvec{\mu}},{\varvec{\Sigma}}\right)\) denotes the Gaussian density function with mean \({\varvec{\mu}}\) and covariance matrix \({\varvec{\Sigma}}\), \({\varvec{g}}\) is a vector of \(g\left({{\varvec{u}}}_{k}\right) \left(k=1,\dots , K\right)\), and \({\varvec{H}}\) is a matrix of \({\varvec{h}}\left({{\varvec{u}}}_{k}\right) \left(k=1,\dots ,K\right)\).
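Because the mixing rates are all \(1/K\) and the components share the single precision \(\beta\), the responsibility of node \(k\) for element \(n\) reduces to a softmax over scaled squared distances, and the hard assignment is its argmax. The following is a minimal NumPy sketch of this step (illustrative only; the published implementation is in R, and all function names here are ours):

```python
import numpy as np

def responsibilities(X, Y, beta):
    """Posterior probability that element n belongs to node k under the
    equal-weight Gaussian mixture of Eq. (1).

    X : (N, D) element features; Y : (K, D) node images y_k = f(u_k);
    beta : precision of the shared isotropic covariance beta^{-1} I."""
    # squared distances ||x_n - y_k||^2, shape (N, K)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    logp = -0.5 * beta * d2                  # log-density up to an additive constant
    logp -= logp.max(axis=1, keepdims=True)  # numerical stabilisation of the softmax
    R = np.exp(logp)
    return R / R.sum(axis=1, keepdims=True)

def hard_assign(X, Y, beta):
    """Assign each element to the node with the highest responsibility."""
    return responsibilities(X, Y, beta).argmax(axis=1)
```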

The prior distribution of \(g({\varvec{u}})\) is given as a truncated GP with mean 0 and covariance function \({c}_{g}({{\varvec{u}}}_{i},{{\varvec{u}}}_{j};{{\varvec{\xi}}}_{g})\), which handles positive-bounded random functions. The prior distribution of the \(d\)th entry \({h}_{d}({\varvec{u}})\) of \({\varvec{h}}({\varvec{u}})\) is given as a GP with mean \(0\) and covariance function \({c}_{h}({{\varvec{u}}}_{i},{{\varvec{u}}}_{j})\). To be specific, the covariance functions, \({c}_{g}({{\varvec{u}}}_{i},{{\varvec{u}}}_{j};{{\varvec{\xi}}}_{g})\) and \({c}_{h}({{\varvec{u}}}_{i},{{\varvec{u}}}_{j})\), are given by

$$\begin{array}{c}{c}_{g}\left({{\varvec{u}}}_{i},{{\varvec{u}}}_{j};{{\varvec{\xi}}}_{g}\right)={\nu }_{g}\bullet {\text{e}}{\text{x}}{\text{p}}\left(-\frac{{\Vert {{\varvec{u}}}_{i}-{{\varvec{u}}}_{j}\Vert }^{2}}{2{l}_{g}}\right),\end{array}$$

(3)

$$\begin{array}{c}{c}_{h}\left({{\varvec{u}}}_{i},{{\varvec{u}}}_{j}\right)={\left\{\frac{2l\left({{\varvec{u}}}_{i}\right)l\left({{\varvec{u}}}_{j}\right)}{{l}^{2}\left({{\varvec{u}}}_{i}\right)+{l}^{2}\left({{\varvec{u}}}_{j}\right)}\right\}}^\frac{L}{2}{\text{e}}{\text{x}}{\text{p}}\left(-\frac{{\Vert {{\varvec{u}}}_{i}-{{\varvec{u}}}_{j}\Vert }^{2}}{{l}^{2}\left({{\varvec{u}}}_{i}\right)+{l}^{2}\left({{\varvec{u}}}_{j}\right)}\right).\end{array}$$

(4)

In Eq. (3), the hyperparameter \({{\varvec{\xi}}}_{g}\) consists of \({\nu }_{g}\) and \({l}_{g}\), referred to as the variance and the length-scale, that control the magnitude of variances and smoothness of a positive-valued function \(g({\varvec{u}})\) generated from the GP. In Eq. (4), the length-scale parameter \(l\left({\varvec{u}}\right)\) is a function of \({\varvec{u}}\) and parameterized as \(l\left({\varvec{u}}\right)={\text{exp}}\left(r\left({\varvec{u}}\right)\right)\) with the function \(r({\varvec{u}})\) following the GP with mean 0 and covariance function \({c}_{r}({{\varvec{u}}}_{i},{{\varvec{u}}}_{j};{{\varvec{\xi}}}_{r})\). Finally, a gamma prior is placed on the precision parameter \(\beta\) in Eq. (1).
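For reference, both covariance functions can be transcribed directly from Eqs. (3) and (4). The sketch below is an illustrative NumPy version (not the authors' code); `r` stands in for any estimate of the log length-scale function \(r({\varvec{u}})\):

```python
import numpy as np

def c_g(U1, U2, nu_g, l_g):
    """Stationary squared-exponential covariance of Eq. (3)."""
    d2 = ((U1[:, None, :] - U2[None, :, :]) ** 2).sum(axis=-1)
    return nu_g * np.exp(-d2 / (2.0 * l_g))

def c_h(U1, U2, r, L):
    """Non-stationary covariance of Eq. (4) with local length-scale
    l(u) = exp(r(u)); r maps an (M, L) array of points to (M,) values."""
    l1 = np.exp(r(U1))                          # l(u_i)
    l2 = np.exp(r(U2))                          # l(u_j)
    s = l1[:, None] ** 2 + l2[None, :] ** 2     # l^2(u_i) + l^2(u_j)
    prefac = (2.0 * l1[:, None] * l2[None, :] / s) ** (L / 2.0)
    d2 = ((U1[:, None, :] - U2[None, :, :]) ** 2).sum(axis=-1)
    return prefac * np.exp(-d2 / s)
```

With a constant \(r\), Eq. (4) collapses to an ordinary stationary squared-exponential kernel, which is a quick sanity check on any implementation.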

The covariance function in Eq. (4) is the key ingredient of the GTM-LDLV. In general, the covariance function of a GP governs the degree to which the similarity of any two inputs, e.g. \({{\varvec{u}}}_{i}\) and \({{\varvec{u}}}_{j}\), is preserved in the similarity of their outputs. The heterogeneous length-scale over the latent space in Eq. (4) can bring locally varying smoothness to the resulting manifolds in the feature space. In addition, the length-scale function is statistically estimated with the hierarchically specified GP prior based on the covariance function \({c}_{r}({{\varvec{u}}}_{i},{{\varvec{u}}}_{j};{{\varvec{\xi}}}_{r})\).

The unknown parameter to be estimated is \({\varvec{\theta}}=\left\{{\varvec{Z}},\beta ,{\varvec{g}},{\varvec{H}},{\varvec{r}}\right\}\). In the GTM-LDLV, the posterior distribution \(p({\varvec{\theta}}|{\varvec{X}})\) is approximately evaluated using a Markov Chain Monte Carlo (MCMC) method. Iteratively sampling from the full conditional posterior distribution for each \(\{{\varvec{Z}},\beta ,{\varvec{g}},{\varvec{H}},{\varvec{r}}\}\), we obtained a set of ensembles that follow the posterior distribution approximately. By taking the ensemble average over the samples from \(p({\varvec{\theta}}|{\varvec{X}})\), the parameters of the GTM-LDLV are estimated. A detailed description of the GTM-LDLV is given in the Supplementary Information section.

Step 2 (Node expansion): to avoid improper assignments of the \(N\) elements to a redundant set of nodes, we adopt a coarse-to-fine strategy. Starting from a GP model initially trained with \(K<N\) nodes in step 1, we refine the model with an increased number of nodes \(K\ge N\). For example, \(5\times 5\) nodes evenly arranged on the area \(\left[-1, 1\right]\times \left[-1, 1\right]\) in step 1 are incremented to \(K=9\times 9\) by placing additional nodes at the middle points of the line segments connecting adjacent nodes. With the currently given parameters, we can infer the values of \(r\left({\varvec{u}}\right)\) of the covariance function in Eq. (4) at the expanded nodes, \({{\varvec{u}}}_{1},\dots , {{\varvec{u}}}_{K}\). Likewise, the values of \(g\left({\varvec{u}}\right)\) and \({\varvec{h}}\left({\varvec{u}}\right)\) are interpolated. With this initialization, we proceed to the next round of the GTM-LDLV.
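For the square layout, the midpoint insertion can be sketched as follows (illustrative Python; the published implementation is in R). Inserting midpoints into an evenly spaced \(m\times m\) grid is equivalent to regenerating an evenly spaced \((2m-1)\times(2m-1)\) grid; the values of \(r\), \(g\), and \({\varvec{h}}\) at the new nodes would then come from the GP predictive interpolation described above:

```python
import numpy as np

def grid_nodes(m, lo=-1.0, hi=1.0):
    """m x m equally spaced nodes on [lo, hi] x [lo, hi]."""
    t = np.linspace(lo, hi, m)
    gx, gy = np.meshgrid(t, t)
    return np.column_stack([gx.ravel(), gy.ravel()])

def expand_by_midpoints(m, lo=-1.0, hi=1.0):
    """Insert midpoints between adjacent grid nodes: an m x m grid
    becomes (2m - 1) x (2m - 1), e.g. 5 x 5 -> 9 x 9."""
    return grid_nodes(2 * m - 1, lo, hi)
```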

Step 3 (GTM-LDLV subject to one-to-one assignments): finally, the resulting GTM-LDLV is fine-tuned to obtain a tabular display by running the above procedure subject to a one-to-one matching between the \(N\) elements and the \(K\) nodes. By definition, the conditional posterior distribution of the assignment variables is represented as

$$p\left({\varvec{Z}}|{\varvec{X}},{{\varvec{\theta}}}_{-{\varvec{Z}}}\right)\propto \prod_{n=1}^{N}\prod_{k=1}^{K}{{\text{exp}}\left({-\frac{\beta }{2}\Vert {{\varvec{x}}}_{n}-{{\varvec{y}}}_{k}\Vert }^{2}\right)}^{{z}_{kn}}=\mathrm{exp}\left(-\frac{\beta }{2}{\sum }_{n=1}^{N}{\sum }_{k=1}^{K}{z}_{kn}{\Vert {{\varvec{x}}}_{n}-{{\varvec{y}}}_{k}\Vert }^{2}\right),$$

where \({{\varvec{\theta}}}_{-{\varvec{A}}}\) represents a set of the parameters obtained by removing \({\varvec{A}}\) from \({\varvec{\theta}}\). In the MCMC calculation in step 1, we iteratively draw a sample of \({\varvec{Z}}\) from this distribution. Here, instead of performing the random sampling, we conduct the maximization of the logarithmic posterior with respect to \({\varvec{Z}}\) subject to the constraint of one-to-one assignments. The problem amounts to finding the solution of

$$\begin{array}{c}\underset{{\varvec{Z}}\in A}{\mathrm{max}}-{\sum }_{n=1}^{N}{\sum }_{k=1}^{K}{z}_{kn}{\Vert {{\varvec{x}}}_{n}-{{\varvec{y}}}_{k}\Vert }^{2},\\ A=\left \{{\varvec{Z}}\left|{\sum }_{k=1}^{K}{z}_{kn}=1\right. \left(n=1,\dots ,N\right), {\sum }_{n=1}^{N}{z}_{kn}\le 1 \left(k=1,\dots ,K\right) \bigg \}.\right.\end{array}$$

This is regarded as a transportation problem where the sum of the squared Euclidean distance between an element feature \({{\varvec{x}}}_{n}\) and a node \({{\varvec{y}}}_{k}\) embedded in the feature space is the cost of transporting one item from source \(k\) to destination \(n\) under the constraint \(A\). We use the lpSolve package18 in R19 to solve the transportation problem.
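The paper solves this transportation problem with the lpSolve package in R. For illustration only, an equivalent one-to-one matching can be obtained in Python as a rectangular linear assignment problem via the Hungarian algorithm in SciPy (our own sketch, not the published code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_assignment(X, Y):
    """Maximise the log-posterior over Z subject to one-to-one matching:
    equivalently, minimise sum_n ||x_n - y_{k(n)}||^2 with each of the
    N elements sent to a distinct node among K >= N candidates."""
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # (N, K) cost matrix
    rows, cols = linear_sum_assignment(cost)  # rectangular costs are supported
    return cols  # cols[n] is the node index assigned to element n
```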

This partially modified MCMC is iterated a few times (e.g. \(T=10\)) to fine-tune the currently given parameters. The assignment variables and the other parameters that exhibit the highest likelihood are chosen to form the final estimate of the PTG. A summary of the PTG algorithm is given in Supplementary Algorithm 1.

Interpretation

The PTG autonomously creates a tabular display of the chemical elements according to the estimated \({\varvec{Z}}\). To understand how the element features such as melting point and electronegativity are compressed on the low-dimensional tabular display, each of the features is mapped onto the resulting table. Specifically, we overlay a smoothed heatmap of each feature on the table. With this PTG property landscape20, we can visually understand the distribution of the topographical mapping that indicates how the element features are embedded in the latent space.

Periodic table as an element descriptor

We consider an evaluation basis for the quality of a designed periodic table in terms of a novel view from data science. A periodic table, including Mendeleev’s classic table, can be considered as one of the most primitive descriptors that encodes known element features into the coordinate system of a low-dimensional latent space. Neighbouring elements on a table should behave similarly and possess similar physicochemical properties. Inspired by such an idea, we consider the use of a periodic table as a descriptor of chemical elements in a task of predicting materials properties based on machine learning21. The periodic table is then evaluated quantitatively based on the predictive performance of the descriptor.

For a given table, the coordinates \({{\varvec{u}}}_{k(1)}, \dots , {{\varvec{u}}}_{k(N)}\) of the nodes to which the \(N\) elements are assigned are used as a set of element descriptors. For a compound \(S\), its fractions of the \(N\) elements are denoted by \({w}_{1}\left(S\right),\dots , {w}_{N}\left(S\right)\), where \(0\le {w}_{n}\left(S\right)\le 1\) and \({\sum }_{n=1}^{N}{w}_{n}\left(S\right)=1\). The compositional descriptor of \(S\) is calculated as \({\varvec{\phi}}\left(S\right)={\sum }_{n=1}^{N}{w}_{n}\left(S\right){{\varvec{u}}}_{k(n)}\). With this descriptor, we derive a prediction model \(Y=f\left({\varvec{\phi}}\left(S\right)\right)\), trained on \(m\) training instances \({\left\{{Y}_{i}, {S}_{i}\right\}}_{i=1}^{m}\), that describes a physicochemical property \(Y\) as a function of the descriptor \({\varvec{\phi}}\left(S\right)\) for any given compound \(S\). Descriptors exhibiting higher predictability should be recognised as providing more efficient compression of the \(N\) elements. For comparison, the same analysis was performed using the two-dimensional coordinates of the standard periodic table, PCA, and t-SNE, respectively.
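The descriptor \({\varvec{\phi}}(S)\) is simply the fraction-weighted average of the assigned node coordinates. A small illustrative sketch (function and variable names are ours):

```python
import numpy as np

def composition_descriptor(w, U_assigned):
    """phi(S) = sum_n w_n(S) u_{k(n)}: fraction-weighted average of the
    table coordinates of the constituent elements.

    w : (N,) compositional fractions, summing to 1
    U_assigned : (N, L) node coordinates u_{k(1)}, ..., u_{k(N)}"""
    w = np.asarray(w, dtype=float)
    U_assigned = np.asarray(U_assigned, dtype=float)
    return w @ U_assigned
```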

Data: element features

The element feature set was extracted from XenonPy22, which is a Python library for materials informatics, by using an Application Programming Interface (API) (see the XenonPy website23). The original dataset consisted of 74 features of 118 elements. Since elements with large atomic numbers contained many missing values, we selected the 54 elements with atomic numbers 1–54, hydrogen through xenon, which are considered sufficient to retain the periodic rule. After removing features that contained one or more missing values, the dataset was reduced to 39 features of 54 elements. For the 54 × 39 data matrix, each feature (column) was standardized to have mean 0 and variance 1. A heatmap display of the data matrix and a detailed description of the 39 features are provided in Supplementary Figs. S1 and S2, respectively.

Analysis procedure

We performed the PTG on two different layouts of nodes: a square layout and a three-dimensional conical layout. In the square layout of \(L=2\), we set \(K=25\) in the first step of the PTG, in which the \(5\times 5\) nodes were evenly arranged on the area \(\left[-1, 1\right]\times \left[-1, 1\right]\). In the second step, we increased the number of nodes to \(9\times 9\) by placing new nodes at the middle points of the line segments connecting adjacent nodes. In the conical layout of \(L=3,\) we first used a set of nodes with \(K=25\) that were arranged uniformly on the surface of the cone placed in the area \(\left[-1, 1\right]\times \left[-1, 1\right]\times \left[-1, 1\right]\). The cone was sliced into 4 sections of equal height along the vertical axis. Then, 1 (vertex), 4, 8, and 12 (bottom) nodes were uniformly placed on the outer part of the 4 cut surfaces. In the next step, the number of slices was increased to 7, and 1 (vertex), 4, 8, 12, 16, 20, and 24 (bottom) nodes were uniformly arranged in the same way. In both cases, we set \({{\varvec{\xi}}}_{g}={{\varvec{\xi}}}_{r}=\left(1/3, 3\right)\), the number of iterations in the MCMC was set to \(T=\mathrm{10,000}\) with the burn-in step \({T}_{b}=5000\), and the number of iterations in the third, fine-tuning step was set to \(T=10\). See the Supplementary Information section for further details on the hyperparameter settings and analysis procedure.

The PTG algorithm was implemented in R, and the codes are available at24 together with the element dataset. Readers can run the PTG algorithm with the element data used in this paper. As a demonstration, the PTG was performed on three additional layouts: a rectangular table with \(5\times 18\) equally spaced grids, which is the same as the layout of the standard periodic table, and two three-dimensional layouts taking the forms of a cylinder and a cube, respectively. The results are shown in Fig. S8.


South Africa: Cabinet Approves UN ICERD Periodic Report


Cabinet has this week approved the submission of South Africa’s ninth to 11th periodic country report on the UN International Convention on the Elimination of All Forms of Racial Discrimination (ICERD).

This was in accordance with South Africa's commitment to the ICERD, which it signed in 1994 and ratified on 10 December 1998.

This was confirmed by acting Minister in the Presidency, Khumbudzo Ntshavheni, while addressing the media on the outcomes of this week’s Cabinet meeting.

“The report outlines progress made by South Africa in putting in place legislative, judicial and administrative measures to eliminate all forms of racial discrimination,” said the Minister.

She said the periodic report focuses on the progress made in advancing equality, fighting xenophobia and other related intolerance, prevention of hate crimes, and highlights challenges that still remain. After its presentation to the relevant body, the report will be made public.

In the meeting, Cabinet also approved the tabling of the International Convention on the Suppression and Punishment of the Crime of Apartheid to Parliament for accession.

“This is done in terms of Section 231(2) of the Constitution of the Republic of South Africa of 1996. The convention, among others, declares apartheid as a crime against humanity and that it posed a serious threat to international peace and security,” she said.

Once approved by both houses of Parliament, the Department of International Relations and Cooperation will deposit the instrument of accession with the UN, said the Minister.

Also agreed to in the Cabinet meeting was the amendment of the agreement between South Africa and the Netherlands on social security.

The cooperation agreement on social security was signed in The Hague in May 2001.

The agreement facilitates the export of social security benefits for the respective citizens.

“The Netherland Social Security Policy has made amendments to its export social security in respect to the Dutch children. The proposed amendment is to align the agreement to these changes,” said the Minister. – SAnews.gov.za



Experimental Tests Of Relativistic Chemistry Will Update The Periodic Table

All chemistry students are taught about the periodic table, an organization of the elements that helps you identify and predict trends in their properties. For example, science fiction writers sometimes describe life based on the element silicon because it is in the same column in the periodic table as carbon.

However, there are deviations from expected periodic trends. For example, lead and tin are in the same column in the periodic table and thus should have similar properties. However, whilst lead-acid batteries are common in cars, tin-acid batteries don't work. Nowadays we know that this is because most of the energy in lead-acid batteries is attributable to relativistic chemistry but such chemistry was unknown to the researchers who originally proposed the periodic table.

Relativistic chemistry is difficult to study in the superheavy elements, because such elements are generally produced one atom at a time in nuclear fusion reactions and decay quickly. Nevertheless, the ability to study the chemistry of superheavy elements could uncover new applications both for the superheavy elements themselves and for common lighter elements, such as lead and gold.

In a recent study in Nature Chemistry, researchers from Osaka University studied how single atoms of superheavy rutherfordium metal react with two classes of common bases. Such experiments will help researchers use relativistic principles to better utilize the chemistry of many elements.

"We prepared single atoms of rutherfordium at RIKEN accelerator research facility, and attempted to react these atoms with either hydroxide bases or amine bases," explains Yoshitaka Kasamatsu, lead author on the study. "Radioactivity measurements indicated the end result."

Researchers can better understand relativistic chemistry from such experiments. For example, rutherfordium forms precipitate compounds with hydroxide base at all concentrations of base, yet its homologues zirconium and hafnium do so only at high concentrations. This difference in reactivity may be attributable to relativistic chemistry.

"If we had a way to produce a pure rutherfordium precipitate in larger quantities, we could move forward with proposing practical applications," says senior author Atsushi Shinohara. "In the meantime, our studies will help researchers systematically explore the chemistry of superheavy elements."

Relativistic chemistry explains why bulk gold metal is not silver-colored, as one would expect based on periodic table predictions. Such chemistry also explains why mercury metal is a liquid at room temperature, despite periodic table predictions. There may be many unforeseen applications that arise from learning about the chemistry of superheavy elements. These discoveries will depend on newly reported protocols and ongoing fundamental studies such as this one by Osaka University researchers.

More information: Co-precipitation behaviour of single atoms of rutherfordium in basic solutions. Nature Chemistry. DOI: 10.1038/s41557-020-00634-6

Citation: Experimental tests of relativistic chemistry will update the periodic table (2021, February 16) retrieved 26 February 2021 from https://phys.org/news/2021-02-experimental-relativistic-chemistry-periodic-table.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

