## Abstract

As semiconductor electronic devices scale to the nanometre range and quantum structures (molecules, fullerenes, quantum dots, nanotubes) are investigated for use in information processing and storage, it becomes useful to explore the limits imposed by quantum mechanics on classical computing. To formulate the problem of a quantum mechanical description of classical computing, electronic devices and logic gates are described as quantum sub-systems with inputs treated as boundary conditions, outputs expressed as operator expectation values, and transfer characteristics and logic operations expressed through the sub-system Hamiltonian, with constraints appropriate to the boundary conditions. This approach naturally leads to a description of the sub-systems in terms of density matrices. Application of the maximum entropy principle subject to the boundary conditions (inputs) allows for the determination of the density matrix (logic operation), and for the calculation of expectation values of operators over a finite region (outputs). The method allows for an analysis of the static properties of quantum sub-systems.

## 1. Introduction

The harnessing of quantum systems to perform new methods of information processing has attracted interest in recent years; an overview of the subject may be found in the book by Nielsen & Chuang (2000). In this study, the complementary problem of how to design classical computing schemes from quantum mechanical systems is undertaken. The first step in such a study is the formulation of quantum mechanics in a manner that allows for a physical description of transistors, logic gates and circuits, and thereby explores the fundamental limits imposed by atomic-scale laws of nature on classical computation. A method is presented allowing for an analysis of the static (dc) properties of physical device elements operating near quantum limits. The partitioning of a quantum system into sub-systems naturally leads to a description relying on density matrices. By invoking the maximum entropy principle, the steps needed for determination of the form of the density matrix required to describe a sub-system are outlined. Simple quantum models for a sub-system are introduced and the conditions imposed for their interpretation as logic gates are studied. The procedure is then extended to the quantum mechanical treatment of general electronic systems and related to recent calculations of current–voltage characteristics for molecular junctions.

## 2. Quantum sub-systems as circuit blocks

An electronic circuit connected to a battery may be viewed as a closed quantum mechanical system. If the circuit is closed at time *t*=0, a current begins to flow. The wavefunction of the entire system, if initially prepared in a pure state, is governed by the time-dependent Schrödinger equation(2.1)with *H*_{C} the Hamiltonian operator for the entire circuit (atomic units *ℏ*=*m*_{e}=*e*=1 are used). If it is assumed that the battery has sufficient capacity to maintain a steady state, the circuit may be approximated by the time-independent Schrödinger equation for times short compared to the battery lifetime(2.2)with *E*_{C} the total energy of the system composed of the electronic circuit plus battery. If at time *t*=0 the circuit plus battery are not in a pure state, then a statistical density operator must be introduced and the state of the system is represented as a mixture of the eigenfunctions of *H*_{C}. But in any event, a closed system treatment of a circuit is not useful for a description of the circuit components, nor does it facilitate the application of the principles of design for the entire circuit, as the overall circuit behaviour is a result of the connection of individual components. In reality, the components have no absolute physical meaning as they, their connections, and the power source form one physical system. However, the circuit can be partitioned into sub-systems and the behaviour of the sub-systems can be examined from a quantum mechanical viewpoint. Identification of functional blocks within the physical system then leads to a means for hierarchical design.

In general, a quantum mechanical sub-system cannot be described as a pure state and the introduction of the reduced density matrix is required. For the total circuit wavefunction, an operator expectation value is given by(2.3)The total system is partitioned into {*q*} and {*Q*}, with *q* denoting the degrees of freedom of a sub-system and *Q* denoting all other degrees of freedom. The expectation value of an operator acting only on the sub-system may be written as(2.4)In turn, a reduced density matrix can be defined as(2.5)The expectation value of the operator may now be written(2.6)It should be noted that the statistical density matrix operator and the reduced density matrix operator are related, in that the former is obtained by integrating over bath degrees of freedom (represented by *Q* above), leaving a statistical density matrix governing (sub-)system (denoted by *q* above) expectation values. A statistical average over a system weakly coupled to bath degrees of freedom is equivalent to a reduced density matrix description, as shown in the review by ter Haar (1961).
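The partial trace of equation (2.5) can be carried out numerically for the smallest non-trivial case. The following sketch is not from the paper: the Bell state and the spin observable are illustrative choices. It traces out the {*Q*} degrees of freedom of a two-qubit pure state and evaluates a sub-system expectation value as in equation (2.6):

```python
import numpy as np

# Two-qubit pure state |psi> = (|00> + |11>)/sqrt(2): sub-system q entangled with "bath" Q.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

# Density matrix of the full system, rho = |psi><psi|.
rho_full = np.outer(psi, psi.conj())

# Reduced density matrix for q: reshape to indices (q, Q, q', Q') and trace over Q = Q'.
rho_q = np.einsum('iaja->ij', rho_full.reshape(2, 2, 2, 2))

# Expectation value of a sub-system operator, <A> = Tr(rho_q A), here A = sigma_z.
sigma_z = np.diag([1.0, -1.0])
expect_sz = np.trace(rho_q @ sigma_z).real

print(rho_q)       # -> 0.5 * identity: a mixed state, though the full state is pure
print(expect_sz)   # -> 0.0
```

The example also shows why the reduced description is unavoidable: although the full circuit state is pure, the sub-system alone is maximally mixed.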

The output state(s) *O*_{k} of a device(2.7)is (are) determined by the inputs *I*_{j} in a deterministic manner through a function *f* governing the operation (Boolean logic gate, analogue signal processor, etc.) performed by the sub-system on the inputs. For a sub-system to act as a circuit component, inputs must correlate to outputs; a general way to express this correlation is through the expectation values:(2.8)In other words, the inputs and outputs must be specified in terms of the sub-system observables, or operator expectation values, and these quantities must correlate through a density matrix. If the input and output are specified in terms of local operators, then in general the density matrix should not factorize over input and output degrees of freedom, for if the density matrix factorizes:(2.9)Alternatively, if the density matrix does factorize between inputs and outputs, a non-local operator should couple the inputs and outputs, for example,(2.10)If the expectation value of only the output degrees of freedom is computed, the interdependence with the inputs must be carried by the reduced density matrix or by the operator describing the output degrees of freedom.

In figure 1*a*, a cascaded system is shown; if it is to function usefully as a circuit composed of functional blocks, the reduced density matrices for the two individual sub-systems should factorize. For if the circuit density matrix factorizes over sub-systems, the following relation holds:(2.11)with the last line indicating that sub-system *B* behaves as specified (the sub-system density matrices are taken to be normalized ). In figure 1*b*, a system of cascaded gates with fan-out is shown. It is noted that if the blocks *B* and *C* behave as prescribed, the outputs *O*_{1} and *O*_{2} are not correlated in the sense defined, for(2.12)Similar relations hold for interconnected gates with multiple inputs.
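The factorization argument behind equation (2.12) can be checked numerically: for a density matrix that factorizes over two sub-systems, a joint output expectation value reduces to the product of the individual ones. A minimal sketch, in which the random density matrices and the spin observables are illustrative assumptions rather than quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    """Random d-dimensional density matrix (Hermitian, positive, unit trace)."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

rho_B = random_density_matrix(2)   # sub-system B
rho_C = random_density_matrix(2)   # sub-system C
O1 = np.diag([1.0, -1.0])          # output observable on B
O2 = np.diag([1.0, -1.0])          # output observable on C

# Factorized density matrix over the two sub-systems: rho = rho_B (x) rho_C.
rho = np.kron(rho_B, rho_C)

lhs = np.trace(rho @ np.kron(O1, O2))              # <O1 O2>
rhs = np.trace(rho_B @ O1) * np.trace(rho_C @ O2)  # <O1><O2>
print(np.allclose(lhs, rhs))   # -> True: factorization implies uncorrelated outputs
```

The identity Tr[(A⊗B)(C⊗D)] = Tr(AC)Tr(BD) guarantees the result for any choice of observables, which is the sense in which outputs *O*_{1} and *O*_{2} of well-behaved cascaded blocks are uncorrelated.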

## 3. Input/output

The output of a sub-system is the response due to the input. This statement implies that the input degrees of freedom are control variables set externally to the sub-system, whereas the output is determined solely by the state of the input variables. The statement, seemingly innocuous, leads to implications for the definition of input and output variables for a sub-system and implies the form needed for a quantum mechanical treatment of a circuit component. The input degrees of freedom must interact with external sources and their state must correspond to the state of external variables. Output degrees of freedom need only interact strongly with other sub-system degrees of freedom, and this correlation must be strong enough to hold when the output interacts with subsequent stages. If sub-systems are to function usefully in a circuit, they must be cascaded, and the outputs should be able to set the state of a subsequent stage's inputs and to display fan-out.

If the sub-system is viewed in isolation, the input states correspond to information known for a subset of the degrees of freedom composing the sub-system, whereas the output corresponds to an expectation value over select degrees of freedom that interact with the inputs. If it is assumed that only expectation values over the input degrees of freedom are known, and that the form of all interactions is given through a Hamiltonian operator for the sub-system, the best estimates for the output expectation values are then given by the maximum entropy estimate as applied to physical systems first formulated by Jaynes (1957*a*,*b*), in a re-statement of the method given by Shannon (1948) for determining probability distributions based upon partial knowledge. The procedure is straightforward in the context of the present discussion. Following Shannon (1948) and Jaynes (1957*a*,*b*), the entropy of a probability distribution is quantified by(3.1)The maximum entropy estimate consists of maximizing the entropy subject to constraints corresponding to known values of system or sub-system expectation values(3.2)with the Lagrangian multipliers *β*, *λ*_{i} chosen to enforce the constraint conditions. The maximum entropy condition requires the density matrix take the form(3.3)and the normalization condition fixes *λ*_{0}. For simplicity, the normalization condition is implicit in the following.

Once the density matrix for a sub-system is determined, output expectation values may be expressed as(3.4)In this form, the interpretation of a quantum sub-system as a logic gate or other information processing device becomes apparent: the input states appear as constraints to the density matrix (enforced by the Lagrangian multipliers *λ*_{i}), and the response to the inputs is governed by the sub-system Hamiltonian , which for the purposes of the present discussion defines the operation of the gate. As written, all external interactions constrain the expectation values for input degrees of freedom; however, the form of the density matrix may be generalized to allow for flexibility in choosing the operation of a device or gate by external coupling to the sub-system Hamiltonian:(3.5)External interactions with the sub-system may be chosen to modify the action of the sub-system Hamiltonian, and thereby the response of the device to the inputs.
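The determination of the Lagrangian multipliers in equation (3.3) can be illustrated for a single two-level sub-system. In the sketch below, the Hamiltonian, the constrained "input" observable and the target expectation value are all hypothetical choices, not taken from the paper; the multiplier enforcing the constraint is found by bisection, and the resulting density matrix then predicts any "output" expectation value in the spirit of equation (3.4):

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.diag([1.0, -1.0])

beta = 1.0     # inverse temperature (multiplier on <H>)
H = sigma_x    # hypothetical sub-system Hamiltonian defining the "operation"
A = sigma_z    # constrained input observable
target = 0.6   # prescribed input expectation value <A>

def maxent_rho(lam):
    """rho = exp(-beta*H - lam*A) / Z, the form required by equation (3.3)."""
    w, V = np.linalg.eigh(-(beta * H + lam * A))
    rho = (V * np.exp(w)) @ V.T          # V diag(e^w) V^T
    return rho / np.trace(rho)

def expect_A(lam):
    return np.trace(maxent_rho(lam) @ A)

# <A>(lam) decreases monotonically with lam, so bisection fixes the multiplier.
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if expect_A(mid) > target else (lo, mid)

rho = maxent_rho(0.5 * (lo + hi))
print(np.trace(rho @ A))        # -> 0.6, the enforced input constraint
print(np.trace(rho @ sigma_x))  # the predicted output expectation value
```

Because *H* and *A* do not commute here, the full matrix exponential (via eigendecomposition) is needed; the same constrained-optimization structure carries over to the many-electron case discussed in §5.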

## 4. Ising gates

In this section, the density matrix formalism is applied to the analysis of finite systems for information processing. A simple model of a finite set of interacting spins is explored within the framework outlined, and the application of external constraints and the sub-system response to those constraints are interpreted in terms of logic gates. To perform Boolean logic design, a functionally complete set of logic gates is required. The AND and NOT operations form a functionally complete set, and their operation will be demonstrated with the spin model. To couple the individual logic gates to perform an arbitrary logic function, a connection scheme is required whereby outputs from one gate can be propagated to the inputs of the next set of gates, which also implies the necessity of fan-out. To demonstrate this possibility, a spin wire is introduced.

The basis for the logic gates and their connections is a set of spins governed by nearest neighbour spin–spin interactions as described by the two-dimensional Ising model. This simple model demonstrates in a transparent manner features needed for design of computing elements with a quantum mechanical treatment of a physical system comprising the element. The basic Hamiltonian defining the Ising model for a set of *N* spins interacting with an external magnetic field *B*_{z} and with nearest neighbour interactions is given by(4.1)which at zero external magnetic field reduces to(4.2)It is understood that the spin–spin sum is restricted to nearest neighbour spins. It is further assumed that the spins only align parallel or anti-parallel to the external field (at vanishing external field, it is still assumed that the spins only align in one spatial direction). The spin eigenvalues are(4.3)in units of *ℏ*/2. The Ising systems in the following are simulated using the algorithm given by Metropolis *et al*. (1953), whereby the thermal fluctuation of the spins is treated in the canonical ensemble. The system Hamiltonian is modified to account for the constrained degrees of freedom consistent with the maximum entropy estimate. Input degrees of freedom are constrained to fixed values; for example, if the majority of inputs are aligned 'up', then this state may be interpreted as a logical one; conversely, spins predominantly aligned 'down' are interpreted as a logical zero. Physically, the setting of the input states could occur by the application of a magnetic field that is spatially localized to interact only with the input degrees of freedom. Output degrees of freedom interact with the input spins through the two-dimensional Ising lattice of spins. Output expectation values are calculated as ensemble averages from the Metropolis algorithm using the maximum entropy density matrix for a subset of the spins identified to be the output signal.
Use of the canonical ensemble assumes the finite system is in contact with a thermal reservoir; the final state of the sub-system and fan-out, or signal gain, are achieved at the expense of coupling the wire to a 'supply', in this case the thermal reservoir. Note, however, that once the steady state is found there is no net energy flow or power required. This is similar to a conventional CMOS inverter, where power is required for switching but is not required to maintain a steady state.

### (a) Ising wire

The purpose of a wire is to allow propagation of a signal from the output of one gate to the input of one or more gates. In figure 2*a*, a schematic of a two-dimensional array of 20×20 Ising spins is shown. The first column of spins is used as the input signal and the spins in this column have either been set to zero (removed) or constrained to a fixed value. As shown within the figure, the set of spins within the box at the left-hand side of the figure are selected as the inputs, and likewise a set of spins on the right-hand side of the figure have been selected as the output.

In figure 3, the output magnetization of the system, that is, the expectation value of the output spins, is plotted as a function of the ratio of the coupling strength to the thermal energy *J*/*k*_{B}*T*. For fixed coupling and high temperatures, thermal fluctuations dominate and the correlation between input and output is lost. At lower temperatures, spontaneous magnetization of the system occurs due to the ferromagnetic coupling in the Hamiltonian (parallel spins are favoured energetically). As a consequence, when the inputs are set to a given logic state, the lattice spins directly coupled to the input preferentially align with the input spins, and in this manner the input state propagates across the spin domain to the output. It should be noted, however, that as the temperature is further decreased, the spin configurations become 'frozen in': the fluctuations become too small to propagate the signal efficiently across the lattice. Hence, there is a window of operation between the high and low temperature regimes in which the input spins readily correlate to the output spins. It is also noted that fan-out is achieved, in that a larger number of spins than set at the input can be aligned to the input state at the output.
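The wire behaviour described above can be sketched with a short Metropolis simulation. The parameters below (coupling ratios, sweep counts, the all-up initial state) are illustrative choices rather than those of the original calculations; the first column of a 20×20 lattice is clamped as the input and the magnetization of the last column is read out as the output:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 20   # 20x20 lattice of Ising spins, as in figure 2a

def run_wire(J_over_kT, n_sweeps=600):
    """Metropolis sampling of an Ising wire; first column clamped to logical one."""
    s = np.ones((L, L), dtype=int)           # start aligned with the input state
    outputs = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i = int(rng.integers(L))
            j = int(rng.integers(1, L))      # the input column is never updated
            h = s[i, j - 1]                  # nearest-neighbour sum, open boundaries
            if j < L - 1:
                h += s[i, j + 1]
            if i > 0:
                h += s[i - 1, j]
            if i < L - 1:
                h += s[i + 1, j]
            dE = 2.0 * J_over_kT * s[i, j] * h   # flip cost in units of k_B * T
            if dE <= 0.0 or rng.random() < np.exp(-dE):
                s[i, j] = -s[i, j]
        if sweep >= n_sweeps // 2:           # first half discarded as equilibration
            outputs.append(s[:, -1].mean())  # output column magnetization
    return float(np.mean(outputs))

out_cold = run_wire(0.6)   # inside the operating window: output follows the input
out_hot = run_wire(0.1)    # thermal fluctuations dominate: correlation is lost
print(out_cold, out_hot)
```

At *J*/*k*_{B}*T*=0.6 (below the ordering temperature) the output magnetization stays near +1, while at *J*/*k*_{B}*T*=0.1 it averages near zero, reproducing qualitatively the window of operation discussed above.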

### (b) Ising NOT gate

A modification to the Ising wire enables the logical NOT operation. The coupling between the input spins and neighbouring lattice spins is chosen to be anti-ferromagnetic; that is, the sign of the spin–spin coupling between the input and lattice degrees of freedom is reversed. Hence, for couplings *J*/*k*_{B}*T* where the input correlates to the output, the complement of the input is propagated to the output. A representative state of the spin gate operating as an inverter is shown in figure 2*b* for the input state fixed at logical zero. From symmetry, it is seen that flipping the input and lattice spins results in the truth table for the NOT operation.

### (c) Ising AND gate

In figure 4, the input to the spin lattice is changed to allow for two separate regions to act as inputs, and it is assumed that the spin states for the two regions may be set independently. Thus four logical input states (00), (01), (10) and (11) can be set as constraints to the maximum entropy density matrix. In the quantum wire simulations, six input spins were constrained to be aligned parallel; for the two-input gate example, 12 spins are constrained to be aligned. With both inputs constrained to be in the same state (0,0) or (1,1), the Ising lattice will behave in the same fashion as the quantum wire and propagate the state of both sets of input spins to the output degrees of freedom.

The situation becomes slightly more complex when the two inputs are aligned anti-parallel with respect to one another. Now an equal number of input spins are aligned up and an equal number are aligned down. Spins nearest the inputs align with the input states, but a spin boundary forms between the domains with spins up and spins down. At the boundary, all neighbouring spins cannot align and a Bloch wall is formed, resulting in a high-energy state. The longer the boundary, the higher the system energy. As a result, the boundary between spin domains reduces in length as much as possible, and this results in a small spin boundary surrounding one of the inputs. From symmetry, states predominantly aligned up or predominantly aligned down are energetically equivalent. Hence, the instantaneous magnetization of the lattice will determine the output state of the gate, and for the inputs (01) and (10) the output spins will not correlate to the inputs.

Thus, for the device topology considered, a modification to the sub-system Hamiltonian is required to engineer the AND function. This is accomplished by the introduction of a magnetic field (in addition to the fields used to set the input states) coupling to all lattice spins, resulting in a preferential direction to the spin magnetization. The *B* field is chosen to align the spins down in the absence of input constraints, resulting in a logical zero at the output. If both inputs are constrained to logical zero, the output state of the device will remain a logical zero. If one of the inputs is changed to logical one, the device will not change state *if* the energy cost of the lattice spins aligned anti-parallel to that input is less than the energy of the lattice spins coupled to the external field. If a second input's spins become aligned up, the energy associated with the lattice spins coupled to anti-parallel spins at the inputs will double. If this doubling in energy is greater than the energy associated with the coupling of the lattice spins to the external *B* field, then the lattice spins will change state to up and this will propagate to the output spins.

The energy change associated with flipping *N* spins aligned in an external magnetic field is:(4.4)

The energy associated with having *N*_{I} input spins forced to be anti-parallel relative to neighbouring lattice spins is:(4.5)

Hence, for the above scheme to work, the ratio of the magnetic field to the coupling strength must be chosen to satisfy(4.6)but with the lower limit to the magnetic field strength chosen sufficiently large to overcome randomization of the lattice spins due to thermal fluctuations. This analysis easily identifies the strength of interaction needed to achieve the input coupling to the spin lattice, and sets a temperature and coupling range for operation of the device. In figure 4, the operation of the AND device is summarized for a gate with two sets of six input spins, 19×20 total lattice spins, and an external magnetic field coupling *B*=−0.0175*J*, whereby it is seen that the application of the external magnetic field has allowed for the AND operation to be achieved. The output magnetization is plotted as a function of the external magnetic field in figure 5 for the case with both inputs set to logical one, where it is seen that the value of the external magnetic field is chosen to be below the transition point above which the spins become aligned to the external field.
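Since equations (4.4)–(4.6) are not reproduced here, the energy balance can be sketched as follows. This is a plausible reading of the surrounding text rather than the paper's exact expressions, assuming spin eigenvalues ±1 (in units of *ℏ*/2) and that each input spin couples to a single lattice neighbour:

```latex
% Flipping N lattice spins aligned in the external field B_z (cf. (4.4)):
\Delta E_B = 2 N \lvert B_z \rvert
% N_I input spins held anti-parallel to their lattice neighbours (cf. (4.5)):
\Delta E_I = 2 N_I J
% AND operation: one input set high must not flip the lattice, two must (cf. (4.6)):
N_I J \;<\; N \lvert B_z \rvert \;<\; 2 N_I J
```

With *N*_{I}=6 spins per input and *N*=19×20=380 lattice spins, this brackets |*B*_{z}|/*J* between approximately 0.016 and 0.032, consistent with the value *B*=−0.0175*J* quoted for figure 4.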

### (d) Ising OR gate

The OR gate can be implemented similarly to the AND gate by the application of an external magnetic field. For this case, the field is chosen to align spins up in the absence of constraints on the gate inputs; in other words, the field direction is chosen to be the opposite of that for the AND gate. The magnitude of the magnetic field strength is chosen to be the same as for the AND gate and, by symmetry, the spins stay aligned with the external field until the coupling for two sets of input spins overcomes the external field energy, which occurs when both inputs are set to logical zero. In this configuration, the OR logic function is achieved. Note that the same spin topology can achieve either the AND or the OR operation by reversal of the external magnetic field. Hence, the function of the sub-system is determined by the sign of the external *B*-field in(4.7)Changing the direction of the magnetic field serves to complement all the logic inputs and outputs in the AND truth table, resulting in the OR operation, and vice versa.

### (e) Cascaded gates

It is also necessary to consider cascaded gates. For the simple example of the Ising gates, external inputs are again treated as constraints and internal nodes are treated as part of the sub-system, with the output of the last gate determining the sub-system (cascaded gates) response. This reflects that the partitioning is not unique and that one can investigate either a component or an entire circuit with the same formalism. For example, a spin wire followed by an inverter, and two cascaded inverters, have been studied. For the design parameters given above, the circuits behaved as intended. If this were not the case, the analysis permits an identification of failure mechanisms and provides a means for designing around them.

### (f) Gate design and defective systems

The above description of spin gates has been put forward to demonstrate the design and analysis of nanoscale logic devices using reduced density matrices in conjunction with maximum entropy estimates. Using the concept that inputs, logic operations and outputs may be treated as constraints, sub-system Hamiltonians and expectation values, respectively, the design and simulation of a complete set of logic gates from a quantum mechanical perspective has been readily achieved. Inherently quantum systems can be designed to perform classical operations. This analysis becomes particularly important when a circuit component is operating at the limits of classical operation. It becomes of interest to ask what tolerance a gate or circuit element design has to defects or other instabilities, and to predict sensitivities to fabrication, defects, input thresholds, thermal fluctuations, and so forth.

To demonstrate one such study, the case of the spin wire is re-examined by considering the effect of lattice vacancies on its operation. In figure 6, a sequence of simulations with increasing number of randomly chosen lattice defects is shown. At each defect density, five different random distributions for the vacancies are chosen. As the defect density is increased, the ability of the input to correlate to the output is reduced and for large defect densities, the wire is incapable of propagating the input signal. If a threshold for the magnetization is chosen such that |〈*M*〉|>1 for a valid output state, it is concluded that the operation of the wire is only possible if the defect density does not exceed a few percent of the total lattice spins.

## 5. Constrained many-electron systems

For semiconductor devices, material systems determine the form of the sub-system Hamiltonian, and the relevant degrees of freedom for input and output are electronic degrees of freedom and the quantities derived from them (current, voltage). Considering only the electronic motion, the non-relativistic Born–Oppenheimer Hamiltonian for an *N*-electron system may be written as(5.1)with *Ψ* the *N*-electron wavefunction, and **r**_{i} and *s*_{i} the electron positions and spins, respectively. The Hamiltonian operator is the sum of kinetic and Coulomb potential energy contributions(5.2)where *U*(**r**) is the attractive potential energy of an electron in the Coulomb field of the *A* atomic nuclei located at positions **R**_{a}, *a*=1, …, *A*:(5.3)The energy expectation for the *N*-electron system may be written as(5.4)The spin-averaged one-body reduced density matrix corresponding to a wavefunction *Ψ* is defined by(5.5)The inclusion of the factor *N* in the definition of *ρ* ensures that if the wavefunction is normalized 〈*Ψ*|*Ψ*〉=1, then the diagonal density matrix *ρ*(**r**; **r**) is the electronic density at point **r**. Similarly, the spin-averaged two-body reduced density matrix is defined by(5.6)With the factor of *N*(*N*−1), the trace of the two-body reduced density matrix(5.7)yields the probability density for observing an electron pair occupying the positions **r**_{1}, **r**_{2}. The energy expectation value in terms of the one- and two-body reduced density matrices becomes(5.8)Note that the integration limits are over all space and the one-body term is . The energy expectation may be compactly re-written as(5.9)and it is useful to define an energy density:(5.10)A sub-system is defined by considering a finite volume , and the energy associated with this region is given by(5.11)Note that the local energy density includes all interactions with regions external to the sub-system domain through the two-body reduced density matrix *Γ*, which contains all Hartree (Coulomb), exchange and correlation terms. The two-body interactions may be partitioned into those arising solely from the sub-system region and all others:(5.12)

For the purposes of discussion, a semi-classical (Hartree) approximation for the sub-system interacting with the external regions is introduced. For a simple product wavefunction(5.13)of one-electron wavefunctions *ϕ*_{i}, the two-body density matrix factorizes in terms of the one-particle density matrix:(5.14)The energy associated with the regions outside of the sub-system and their interactions with the sub-system may be written within this approximation as(5.15)The one-body term can be re-defined as(5.16)and the energy density re-written as(5.17)The interactions with the external regions could, for example, describe the interaction of a device with electrical contacts. As written, regions external to the sub-system are described within a Hartree approximation, implying that the sub-system region must be chosen large enough to include all important exchange and correlation effects. For more complex contact–device interactions, a more accurate (higher order) approximation to the reduced two-body density matrix governing the external interactions is needed.

When calculating the energy on a region, the boundary conditions must also be specified. For atoms and molecules, vanishing of the wavefunction at infinity is one condition; periodic boundary conditions are appropriate for crystalline systems. For a statistical analysis of a quantum system, weak coupling to external degrees of freedom is assumed. None of these conditions holds for an arbitrary quantum sub-system: the wavefunction will in general not vanish at the boundaries and is not periodic, and coupling to external degrees of freedom is not negligible. To discuss how the framework for quantum system design can be applied to electronic systems, an approach to electron transport recently developed by Delaney & Greer (2004*a*,*b*) is reviewed and placed into the context of the present discussion.

For two-terminal transport simulations, the action of the battery or power supply is modelled in terms of reservoirs. For example, in figure 7 a schematic of two reservoirs in contact with a sub-system is depicted; these are referred to as the left and right reservoirs. The assumption is that the reservoirs are locally in equilibrium, but that the difference in chemical potentials between the left and right gives rise to a potential difference:(5.18)Typically, a single-particle picture is then invoked, and Fermi distributions are assumed for both reservoirs(5.19)with single-particle energies *ϵ*, density of states *n* and Fermi–Dirac distributions *f* in the reservoirs. States lying within the voltage bias window *μ*_{L}−*μ*_{R} may propagate across the sub-system from occupied states in one reservoir into empty states within the other reservoir. A transmission coefficient *T* is associated with each state and the current flow as a function of voltage is calculated from(5.20)It should be noted that this formula is fundamentally incapable of treating correlated electrons (correlation is here defined as contributions to the wavefunction beyond a single Slater determinant). However, for molecular-scale electron devices correlation is known to play an important role, and a means of treating transport within correlated systems is needed. On the other hand, the essential feature of the reservoirs is that the electrons emerge with a distribution fixed by a local equilibrium condition, but that they can absorb electrons with any energy or momentum distribution.
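The single-particle transport picture summarized in equations (5.18)–(5.20) takes the standard Landauer form; the following restatement is a sketch consistent with the surrounding text rather than a reproduction of the paper's equations:

```latex
% Bias window set by the reservoir chemical potentials (cf. (5.18)):
eV = \mu_{\mathrm{L}} - \mu_{\mathrm{R}}
% Local equilibrium in each reservoir (cf. (5.19)):
f_{\mathrm{L,R}}(\epsilon) = \left[ 1 + \mathrm{e}^{(\epsilon - \mu_{\mathrm{L,R}})/k_{\mathrm{B}}T} \right]^{-1}
% Current carried by states in the bias window, each with transmission T(\epsilon) (cf. (5.20)):
I(V) = \frac{2e}{h} \int \mathrm{d}\epsilon \; T(\epsilon)\,
       \bigl[ f_{\mathrm{L}}(\epsilon) - f_{\mathrm{R}}(\epsilon) \bigr]
```

In the atomic units used throughout the paper (*e*=*ℏ*=1), the prefactor simplifies accordingly; the key structural point is that the current is a sum over independent single-particle channels, which is why correlation beyond a single Slater determinant cannot be captured in this form.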

A more general means for imposing reservoir boundary conditions has been developed by Frensley (1990) for non-correlated electrons and has recently been extended to the treatment of correlated many-electron systems by Delaney & Greer (2004*a*,*b*). In the many-electron formalism, the reservoir boundary conditions are imposed through the first-order Wigner function *f*_{W}(**q**, **p**) derived from the *N*-electron wavefunction:

$$f_{\mathrm{W}}(\boldsymbol{q},\boldsymbol{p})=\frac{N}{(2\pi)^{3}}\int \mathrm{d}\boldsymbol{s}\,\mathrm{d}\boldsymbol{r}_{2}\cdots\mathrm{d}\boldsymbol{r}_{N}\;\mathrm{e}^{-\mathrm{i}\boldsymbol{p}\cdot\boldsymbol{s}}\;\Psi^{*}\big(\boldsymbol{q}+\tfrac{1}{2}\boldsymbol{s},\boldsymbol{r}_{2},\ldots,\boldsymbol{r}_{N}\big)\,\Psi\big(\boldsymbol{q}-\tfrac{1}{2}\boldsymbol{s},\boldsymbol{r}_{2},\ldots,\boldsymbol{r}_{N}\big)\qquad(5.21)$$

The Wigner function is real, with **q** serving the role of a position and **p** as momentum. The transform yields as much of a phase-space representation of the electronic motion described by *Ψ* as is permitted by the Heisenberg uncertainty principle, and thus cannot be expected to have all the properties of a classical probability density. Most notably, the Wigner function need not always be positive, as is strictly required for a probability interpretation. However, the Wigner function is known to provide a useful, although not unique, phase-space portrait of quantum mechanics.

To treat systems composed of a left reservoir, a quantum sub-system and a right reservoir, as shown in figure 7, it is essential to identify electrons incoming from the left and from the right. From a many-body wavefunction *Ψ* this is a difficult concept to formulate, but with the aid of the Wigner function it becomes a straightforward process. Planes perpendicular to the current flow may be chosen within the reservoirs; using the Wigner position co-ordinate, these planes may be denoted by *q*_{L} and *q*_{R}. The Wigner function *f*_{W}(*x*, *y*, *q*_{L}, *p*_{x}, *p*_{y}, *p*_{z}) for *p*_{z} > 0 is computed for the incoming electrons located at any point on this plane. To simplify the analysis, this quantity is integrated over the in-plane co-ordinates, resulting in the planar average $\bar{f}_{\mathrm{W}}(q_{\mathrm{L}},p_{z})$. With this function, the net inwards momentum flow from the left contact is specified. The Wigner function at *q*_{L}, as computed from the initial equilibrium (no applied voltage) wavefunction, and its planar average are evaluated for a chosen number of momentum values *p*_{i} along the positive *p*_{z} axis. A similar procedure is followed for the right contact, with the distinction that the electrons incoming from the right are those with *p*_{z} < 0. Using the Wigner distribution, properties of the reservoirs can be extracted from the wavefunction for the system at zero current. It is assumed that the Wigner functions $\bar{f}_{\mathrm{W}}(q_{\mathrm{L}},p_{i})$ and $\bar{f}_{\mathrm{W}}(q_{\mathrm{R}},p_{j})$ determine the properties of the reservoirs, and that these characteristics are preserved when the system is driven away from equilibrium. The Wigner function can be calculated at a point in phase space from

$$f_{\mathrm{W}}(\boldsymbol{q},\boldsymbol{p})=\mathrm{Tr}\big[\hat{\rho}\,\hat{f}_{\mathrm{W}}(\boldsymbol{q},\boldsymbol{p})\big],\qquad(5.22)$$

with $\hat{\rho}$ the *N*-electron density matrix, and the values characteristic of the reservoirs are treated as constraints on the sub-system wavefunction. It is assumed that these values remain constrained to their equilibrium values as the reservoirs are driven out of equilibrium with respect to one another, for example by the application of an external electric field. The resulting maximum entropy density matrix becomes (up to normalization)

$$\hat{\rho}=\exp\Big[-\beta\hat{H}-\sum_{i}\lambda_{i}\,\hat{f}_{\mathrm{W}}(q_{\mathrm{L}},p_{i})-\sum_{j}\mu_{j}\,\hat{f}_{\mathrm{W}}(q_{\mathrm{R}},p_{j})\Big].\qquad(5.23)$$

The equilibrium conditions on the reservoirs, along with the application of an external electric field, determine the voltage across the sub-system; this is interpreted as an input to a device. Identifying the Lagrange multiplier *β* as the inverse temperature and taking the low temperature limit,

$$\hat{\rho}\rightarrow|\Psi\rangle\langle\Psi|,\qquad(5.24)$$

where *Ψ* is the ground state of the constrained Hamiltonian $\hat{H}+\sum_{i}\lambda_{i}\hat{f}_{\mathrm{W}}(q_{\mathrm{L}},p_{i})+\sum_{j}\mu_{j}\hat{f}_{\mathrm{W}}(q_{\mathrm{R}},p_{j})$.
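The non-classical character of the Wigner function noted above can be checked numerically. The sketch below is a one-dimensional toy of this article's discussion (ħ = 1, single particle, illustrative state chosen here; not taken from Delaney & Greer): it evaluates the Wigner transform of a superposition of two Gaussian wave packets and finds negative values between the packets.

```python
import numpy as np

def wigner_point(psi, y, q, p):
    """One point of the 1D Wigner transform (hbar = 1):
    f_W(q, p) = (1/pi) * Int dy conj(psi(q + y)) psi(q - y) exp(2 i p y)."""
    dy = y[1] - y[0]
    integrand = np.conj(psi(q + y)) * psi(q - y) * np.exp(2j * p * y)
    return float(np.real(np.sum(integrand)) * dy / np.pi)

# Toy state: superposition of two Gaussian wave packets centred at +/- a.
a = 2.0
def psi(x):
    return np.exp(-0.5 * (x - a) ** 2) + np.exp(-0.5 * (x + a) ** 2)

y = np.linspace(-10.0, 10.0, 2001)  # quadrature grid for the y integral

on_packet = wigner_point(psi, y, q=a, p=0.0)              # on top of a packet
between = wigner_point(psi, y, q=0.0, p=np.pi / (2 * a))  # midway between them
print(on_packet > 0, between < 0)  # prints: True True
```

The negative value at the midpoint is the interference fringe of the superposition; it is exactly the feature that prevents a strict probability reading of *f*_{W}.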

The wavefunction *Ψ* minimising the constrained sub-system energy is the best guess based upon the maximum entropy estimate. Of course, to solve the problem the Lagrange multipliers must be found; this is achieved by treating the problem as a constrained nonlinear optimization, details of which can be found in the papers of Delaney & Greer (2004*a*,*b*). The current density is expressed using the one-body density matrix $\rho_{1}(\boldsymbol{r},\boldsymbol{r}')$ as (in atomic units)

$$\boldsymbol{j}(\boldsymbol{r})=\frac{1}{2\mathrm{i}}\lim_{\boldsymbol{r}'\to\boldsymbol{r}}\big(\nabla_{\boldsymbol{r}}-\nabla_{\boldsymbol{r}'}\big)\,\rho_{1}(\boldsymbol{r},\boldsymbol{r}').\qquad(5.25)$$

The current density is integrated over planes normal to the net current flow. The inputs to the system are, in this case, the external voltage and the constraints specifying the reservoir boundary conditions, and the response of the sub-system is the physical current. As before, the constrained Hamiltonian governs the sub-system response through the maximum entropy estimate for the density matrix. The current can be controlled via the application of external electric fields (voltage bias, gating), thus modifying the response (current) of the sub-system. A quantum mechanical prescription for electron device design results.
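The determination of Lagrange multipliers by constrained optimization can be illustrated on a minimal model. The sketch below is a hypothetical two-level system with a single Hermitian constraint operator (not the actual many-electron implementation of Delaney & Greer): it solves for the multiplier so that a density matrix of the exponential form of equation (5.23) reproduces a target expectation value.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

def max_ent_rho(H, A, lam, beta=1.0):
    """Maximum entropy density matrix rho ~ exp(-beta*H - lam*A),
    normalized to unit trace (cf. the form of eq. (5.23), one constraint)."""
    M = expm(-beta * H - lam * A)
    return M / np.trace(M)

def solve_multiplier(H, A, target, beta=1.0):
    """Find the Lagrange multiplier lam such that Tr(rho A) = target."""
    f = lambda lam: float(np.real(np.trace(max_ent_rho(H, A, lam, beta) @ A))) - target
    return brentq(f, -50.0, 50.0)  # <A> is monotone in lam for this model

# Toy two-level system: H = sigma_z, constraint operator A = sigma_x.
H = np.diag([1.0, -1.0])
A = np.array([[0.0, 1.0], [1.0, 0.0]])

lam = solve_multiplier(H, A, target=0.3)
rho = max_ent_rho(H, A, lam)
print(np.real(np.trace(rho @ A)))  # close to the target value 0.3
```

In the full many-electron problem the constraint operators are the planar-averaged Wigner functions of the reservoirs, and many multipliers must be solved for simultaneously, but the logical structure is the same.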

## 6. Conclusions

Treatment of quantum sub-systems by use of the maximum entropy principle allows us to define quantum mechanical regions as functional blocks. Selecting a sub-set of the degrees of freedom comprising the sub-system and constraining them to fixed values yields the information to be included in the maximum entropy estimate for the density matrix. Expectation values over regions or degrees of freedom interacting with the constraints, as governed by the density matrix, determine the response of the sub-system. In this scheme, the constraints act as inputs, the reduced density matrix describes the operation of the sub-system on the inputs, and use of the maximum entropy density matrix to calculate expectation values determines the output of the sub-system.

Using this approach, the method was applied to the design of logic gates with a simple model for spin systems. The procedure was then developed for the case of a material system modelled by the non-relativistic *N*-electron Coulomb Hamiltonian operator. The means for specifying the constraints to the maximum entropy density matrix was shown for electronic transport and related to recent calculations for molecular scale junctions.

This treatment of a sub-system lends itself to technology design, in that the requirements for a quantum mechanical sub-system to perform a classical function, such as acting as a Boolean gate, can be explored. Likewise, the fundamental limits on the ability of a region to perform a prescribed operation, subject to the underlying quantum behaviour, can be investigated.

## Acknowledgements

This work has been supported by Science Foundation Ireland. We are grateful to one of the reviewers of this paper for drawing our attention to the work of Klein and Meijer.

## Appendix A Maximum entropy versus minimum entropy production

In the approach described, the maximum entropy principle is used to determine the stationary state of the device region. The *least biased* density matrix for the sub-system which is consistent with the constraints is then found, where the Shannon entropy *S* is used to quantify the degree of information contained in the density matrix. This can be compared with a general theorem from Prigogine (1947) stating that the steady state of an irreversible process is characterized by a minimum rate of entropy production. For example, in Klein & Meijer (1954), a quantum mechanical calculation is performed for two identical containers *C*_{1} and *C*_{2}, each containing an ideal gas and connected by a thin capillary allowing particle flow. The containers are maintained at different temperatures *T*_{1}, *T*_{2} by heat baths. Applying rate equations, Klein and Meijer show that for small temperature differences a set of occupation numbers *q*_{i}, *p*_{i} of the left and right boxes minimizes the entropy production d*S*_{total}/d*t* of the *entire system* (including heat reservoirs) if and only if the *q*_{i}, *p*_{i} result in a stationary solution of the Schrödinger equation.
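The *least biased* property of the maximum entropy state can be seen in a small classical analogue. The sketch below (a toy of this note with three equally spaced energy levels and a fixed mean energy; the levels and target value are illustrative assumptions, unrelated to the Klein & Meijer containers) checks that the Gibbs distribution maximizes the Shannon entropy among all distributions satisfying the same constraints.

```python
import numpy as np
from scipy.optimize import brentq

eps = np.array([0.0, 1.0, 2.0])   # three equally spaced energy levels
target = 0.8                      # constrained mean energy <eps>

def gibbs(lam):
    """Maximum entropy distribution p_i ~ exp(-lam * eps_i)."""
    w = np.exp(-lam * eps)
    return w / w.sum()

def shannon(q):
    return float(-np.sum(q * np.log(q)))

# Fix the Lagrange multiplier so the mean-energy constraint holds.
lam = brentq(lambda l: float(gibbs(l) @ eps) - target, -20.0, 20.0)
p = gibbs(lam)

# v spans the directions preserving both constraints:
# sum(v) = 0 and v . eps = 0 for equally spaced levels.
v = np.array([1.0, -2.0, 1.0])
less = [shannon(p + t * v) < shannon(p) for t in (0.02, -0.02)]
print(less)  # prints: [True, True] -- the Gibbs state is the entropy maximum
```

Moving along the constraint manifold in either direction strictly lowers the entropy, which is the finite-dimensional content of the "least biased" criterion used for the device density matrix.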

Are the two approaches consistent? Is it possible to find a set of constraints on the device region that yields the same density matrix when the device entropy is maximized as is obtained from the minimum entropy production approach? A difference between the two approaches is that the entropy of the sub-system *S* includes only local contributions from near the quantum device, while *S*_{total} of Klein and Meijer also includes the entropy of the heat reservoirs. In fact, examining their entropy production, it occurs entirely in the heat reservoirs: as the *q*_{i}, *p*_{i} are stationary, the corresponding expression does not change in time. This global-to-local link is what Klein and Meijer have shown: they relate minimum production of global entropy to local stationarity. Therefore, if constraints are specified such that each density matrix element of the device region (e.g. the containers *C*_{1}, *C*_{2}) is stationary under their master equation, and the local entropy is maximized, it follows that the global entropy production is minimized. This assumes that there is only one stationary solution for the density matrix; the constraints then uniquely specify one state, and the maximization of local entropy is trivial. For several stationary states, the entropy maximization will produce a statistical mixture of these, a case not discussed by Klein and Meijer.

- Received August 11, 2004.
- Accepted August 3, 2005.

- © 2005 The Royal Society