A Solvable Dynamic Principal-Agent Model with Linear Marginal Productivity

We study how to design an optimal contract that provides incentives for the agent to put forth the desired effort in a continuous time dynamic moral hazard model with linear marginal productivity. Using exponential utility and linear production, we consider three different information structures in the principal-agent model: full information, hidden actions, and hidden savings. Applying the stochastic maximum principle, we solve the model explicitly, where the agent's optimization problem becomes the principal's problem of choosing an optimal contract. The explicit solutions to our model allow us to analyze the distortion of allocations. The main effect of hidden actions is a reduction of effort, while the effect on the consumption allocation is smaller. In the hidden savings case, the consumption distortion almost vanishes but the effort distortion is magnified. In our setting, the agent's optimal effort also falls as marginal productivity declines.


Introduction
Private information is a significant feature of many economic environments, which leads to natural questions about how to provide incentives for the agent in a dynamic setting. The design of an optimal employment contract therefore deserves investigation. However, the analysis rapidly becomes complex once the dynamic model includes hidden information and the relevant state variables. In this paper, we illustrate that the analysis of models with hidden actions and hidden savings can be simplified by taking advantage of continuous time methods. Using the assumption of linear production and exponential utility, we study a continuous time contracting model with linear marginal productivity in which the optimal contract can be solved in closed form. In addition to the work of Holmstrom and Milgrom [1], the exponential-linear structure is used in Fudenberg et al. [2], Mitchell and Zhang [3], Cvitanic and Zhang [4], and Williams [5]. The explicit solutions allow us to derive a simple implementable contract and illustrate the effects of information frictions. In particular, the model studied in this article is an extension of Holmstrom and Milgrom [1] and Williams [6].
We study three different information structures: full information, hidden actions, and hidden savings. As in Wang et al. [7], in the full information case the principal can observe the agent's effort, consumption, and wealth. Thus the agent's participation is the only constraint that must be considered in the contract. We then turn to the hidden action case, a classic moral hazard model in which the agent's effort cannot be observed by the principal but the agent's consumption remains observable. Since the principal cannot distinguish whether low output is due to an adverse shock or to low effort, the contract must provide incentives for the agent to put forth the desired effort. As in Williams [5] and Cvitanic and Zhang [4], we derive our results by applying a stochastic maximum principle. Williams [8] developed the approach independently of Cvitanic and Zhang [4], and the work of Williams [5] builds on Williams [8]. We rely on some results of Bismut [9] and employ a change of variables as in Bismut [10]. In this environment, the first-order approach is valid for designing an optimal contract. That is, the incentive constraints can be characterized by the first-order conditions for the agent's effort choice when the agent faces a given contract. In a static setting, Rogerson [11], Jewitt [12], and Mirrlees [13] give different conditions which ensure the validity of the first-order approach. Facing a given contract, the agent participates and puts forth the principal's desired effort, and thus the set of implementable contracts can be fully characterized. Implementable contracts are history-dependent, where the first-order conditions are based on the agent's promised utility under the contract. This form of history dependence starts with Abreu et al. [14, 15] and appears in much of the related literature. Sannikov [16] and Meng et al. [17] studied related continuous time models, and Sannikov [18] gave an overview of the related literature. Similar to Williams [6] and Su et al. [19], the asset and consumption payments occur continuously throughout the contract. The situation where the agent receives a single payment from the principal is considered by Holmstrom and Milgrom [1], Schattler and Sung [20], and Cvitanic et al. [21]. Although the case of dynamic payments is taken into account by Sannikov [16], there are no state variables other than promised utility there. We then turn to the hidden savings case, in which the agent is able to borrow or save in an account that the principal cannot monitor. In this environment, besides the agent's effort, the agent's consumption and wealth cannot be monitored by the principal. As in Williams [5], when the model includes a hidden state variable, we need an additional state variable, which summarizes the "shadow value" of the state, to capture history dependence in the contract. In the case with hidden savings, the agent's current marginal utility of consumption characterizes the shadow value of additional wealth. We apply a first-order approach to derive an optimal contract, similar to the approach of Werning [22], Abraham and Pavoni [23], and Pavan et al. [24] in discrete time dynamic moral hazard models. Garrett and Pavan [25] capture the dynamics of average distortions under the optimal contract and deal directly with the full program by using an alternative variational approach. Unlike the hidden action case, with hidden savings the validity of the first-order approach cannot be guaranteed by convenient conditions. Therefore, we derive a candidate optimal contract by solving the agent's optimality problem, which provides necessary conditions for optimal contracts, and then verify ex post that the contract is indeed implementable. Applying different methods, Mitchell and Zhang [3] and Edmans et al. [26] show that their contracts are incentive compatible in the hidden savings case.
Our contribution is to find explicit solutions of optimal contracts with linear marginal productivity in a fully dynamic environment. We work with a generalized model where marginal productivity is linear and decreasing, while the marginal productivity in the model of Williams [6] is constant. Similar to Holmstrom and Milgrom [1], the optimal contract is also linear in our completely dynamic setting with linear marginal productivity and exponential utility. Facing a given contract, the agent's optimal effort is a constant, but it decreases with the reduction of marginal productivity. Moreover, the payment is linear in the logarithm of the agent's promised utility and in the effective rate of return under the contract. The principal's consumption is proportional to the assets, the logarithm of promised utility, and some time-dependent functions. After solving the optimal contracts, we compare the three different information structures and study their implications. We also show the impact of information frictions and marginal productivity by analyzing the explicit results.
The rest of this paper proceeds as follows. In Section 2, we introduce a basic model including linear marginal productivity and provide a terminal condition which helps us pass from a finite to an infinite horizon. We introduce the implementability of contracts and show the details of the change of variables. In Section 3, we consider the optimal contract with full information, which serves as an efficient benchmark to compare with the private information models. Section 4 derives the solutions to the optimal contract with hidden actions by maximizing the agent's expected utility and the principal's expected utility. Using an additional state variable which is in fact redundant, the optimal contract with hidden savings is presented in Section 5. In Section 6, we compare the three cases and show the analytic results in figures. Section 7 gives a brief conclusion, and finally we provide an Appendix containing some details of the change of measure and the proofs of our main results.

The Model
We consider a model where a principal hires an agent to manage a risky project, and the principal's output is affected by the agent's effort choice. A related model in which output is i.i.d. and both the payments and the consumption occur only at the end of the contract was studied by Holmstrom and Milgrom [1]. As in Williams [6], our model includes intermediate consumption by both the agent and the principal. We also consider the situation of hidden savings, where the agent can borrow or save at a constant rate of return in a risk-free asset. Much of the difficulty of hidden savings comes from the interaction of incentive constraints and wealth effects. Unlike Williams [6], who deals with the problem in an environment of constant productivity, we work with marginal productivity that varies linearly with effort. Since the size of the economy (assets) will affect marginal productivity, it is natural and realistic to characterize marginal productivity by a nonlinear function of effort. Under nonlinear marginal productivity, however, the first-order conditions derived from the Hamilton-Jacobi-Bellman (HJB) equations and the Hamiltonian functions are high-order equations. Consequently, it is very difficult to find explicit solutions of optimal contracts, even when marginal productivity is a quadratic function of effort. To capture more features of real productivity while still obtaining explicit solutions, linear marginal productivity is a better choice. It should be pointed out that we obtain results similar to those in Williams [6]. We show that the agent's optimal effort choice changes with the marginal productivity of effort.

The Model.
The environment in our model is a continuous time stochastic setting. Let (Ω, F, ) be an underlying probability space supporting a standard Brownian motion   . The evolution of information through time is represented by a filtration {F  }, which is generated by the Brownian motion   . We consider a finite horizon [0, ], where  may be arbitrarily large. To extend from a finite to an infinite horizon, we let  → ∞.
At time 0, the principal contracts with the agent to manage a risky project, whose cumulative output proceeds   evolve on [0, ] as follows: where   ∈  ⊂  is the agent's effort choice and  0 ,  1 ,  2 are constants satisfying the restriction that marginal productivity is positive. In addition,  is the volatility, which represents an additional shock to output due to the increment of the Brownian motion. The proceeds of output add to the principal's assets   , which earn a risk-free return , and out of which the principal withdraws his own dividend or consumption   ∈  and pays the agent   ∈  ⊂ . Thus the process of the principal's assets evolves as We do not restrict   ,   ,   to be nonnegative when working with exponential-linear models; thus their interpretations may be a bit strained when they take negative values. We usually refer to   as output, since it captures the same information as   from the principal's point of view. In addition, the agent has his own wealth   , which earns the same rate of return , out of which the agent consumes   and to which he receives the income flow from his payment   . Thus the agent's wealth evolves as At the terminal time , the agent gets a final payment   and chooses his consumption, which depends on his terminal wealth   and the payment   . We give more details about the terminal date below. Full information is self-explanatory and serves as a benchmark, while in the environment of hidden actions the principal cannot observe the agent's effort   or the shocks   but only the assets   . A classic moral hazard problem arises, in which the principal cannot distinguish low output caused by low effort from low output caused by a negative shock. Since the principal cannot verify whether the agent deviates from the desired effort ê , the principal should design the payment {  } to provide incentives for the agent to meet the effort target. Under both full information and hidden actions, the principal has the information of the agent's wealth   and, equivalently, of his consumption   . Since the allocation is determined only by total assets   +   , the agent's saving is redundant. As in Cole and Kocherlakota [27], without loss of generality, we assume that   ≡ 0, so the principal does all the saving and   =   . The final information structure is hidden savings, where the principal cannot observe the agent's wealth   or consumption   for  > 0 but only knows the agent's initial wealth  0 . In this case, besides the desired effort ê , the principal sets the targets ĉ =   and m ≡ 0, which are not observable. Thus the payment scheme {  } must provide incentives for the agent to put forth the desired effort and not to save or borrow.
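The state dynamics described above can be sketched numerically. In the sketch below, the notation, functional forms, and parameter values are all illustrative assumptions (an output drift f(a) = th0 + th1·a + th2·a² with th2 < 0, so marginal productivity is linear and decreasing; asset and wealth evolutions as described in the text), not values taken from the paper:

```python
import numpy as np

# Euler-Maruyama sketch of the model's state dynamics under assumed notation:
#   dY_t = f(a) dt + sigma dB_t, with f(a) = th0 + th1*a + th2*a**2
#   dX_t = (r*X_t - d - s) dt + dY_t    (principal's assets, output proceeds added)
#   dm_t = (r*m_t + s - c) dt           (agent's wealth: payment in, consumption out)
rng = np.random.default_rng(1)
th0, th1, th2, sigma, r = 0.0, 1.0, -0.1, 0.5, 0.04
a, s, c, d = 0.8, 0.3, 0.3, 0.2   # constant effort/payment/consumption/dividend policies
T, n = 1.0, 1000
dt = T / n

X, m = 1.0, 0.0
for _ in range(n):
    dB = np.sqrt(dt) * rng.standard_normal()
    dY = (th0 + th1 * a + th2 * a**2) * dt + sigma * dB
    X += (r * X - d - s) * dt + dY
    m += (r * m + s - c) * dt
print(X, m)
```

With the consumption target c = s, the agent's wealth stays at zero along the path, illustrating the no-savings benchmark m ≡ 0 discussed above.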
As mentioned above, we concentrate on exponential preferences for both the agent and the principal, which allow us to obtain explicit solutions for the optimal contracts. More general preferences would typically require numerical solutions. The agent has a flow exponential utility (, ) = −exp(−( −  2 /2)) over effort and consumption, where  is the coefficient of absolute risk aversion, and a terminal utility  over the final wealth and the terminal payment (5) where  represents the agent's discount rate and  denotes the expected utility over effort  and consumption . The principal has a flow exponential utility () = −exp(−) over his own consumption or dividend, with the same parameters  and  as the agent, and a terminal utility  over the terminal assets and the terminal payment: The assumption of common exponential utility implies risk aversion.

The Terminal Date.
To extend from a finite to an infinite horizon and to find explicit solutions, it is crucial to make particular assumptions about what happens at the terminal date. At the terminal date , we assume that the principal pays the agent   , keeping   −   for himself. From time  onward there is no more production, and both the agent and the principal live off their assets, which earn only the same constant rate of return  for the infinite future. Thus each solves a savings-consumption problem of the following form: where  0 is a given level of assets and   satisfies For the principal, we point out that  0 =   −   and   =   .
For the agent, we point out that  0 =   +   and   =   .
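The savings-consumption problem above has a standard closed form under CARA utility. The following guess-and-verify sketch uses assumed notation (γ for absolute risk aversion, ρ for the discount rate, r for the risk-free return, w for assets), since the paper's displayed symbols are not recoverable:

```latex
% Savings-consumption problem in assumed notation:
%   V(w_0) = \max_{c} \int_0^{\infty} e^{-\rho t}\bigl(-e^{-\gamma c_t}\bigr)\,dt,
%   \qquad \dot w_t = r w_t - c_t .
% HJB equation:
\rho V(w) = \max_{c}\bigl\{-e^{-\gamma c} + V'(w)\,(r w - c)\bigr\}.
% Guess V(w) = -K e^{-\gamma r w}. The first-order condition
% \gamma e^{-\gamma c} = V'(w) gives c = r w - \tfrac{1}{\gamma}\ln(rK),
% and matching coefficients in the HJB pins down
K = \frac{1}{r}\, e^{(r-\rho)/r},
\qquad
c = r w + \frac{\rho - r}{\gamma r}.
```

The constant K here matches the constant of the form (1/r)exp((r − ρ)/r) that appears in the explicit solution of Section 3.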
According to the principle of optimality in dynamic programming, the HJB equation for the savings-consumption problem (7) evolves as Through simple calculations, we verify that the solution and optimal policy are Therefore, for the principal and the agent, we set the terminal utility function in the following form:

Contracts and Implementability.
We now formally introduce contracts and explain the meaning of implementability.
Let  be the space of continuous functions mapping [0, ] into . It is convenient to put a bar over a variable to denote an entire path on [0, ]. Under hidden actions or hidden savings, we define the principal's observation path to be the time path of output  = {  :  ∈ [0, ]}, which is a random element of . We define the filtration {Y  } to be the collection of σ-algebras generated by   up to time . A contract must include a payment   ∈  and a set of recommended actions (ê  , ĉ ) ∈  for all , which are functions of the relevant history. In the environment of hidden savings, these recommended actions are truly adopted by the agent, while under full information we have ĉ =   =   and ê =   , and under hidden actions we have ĉ =   =   . The set of admissible contracts  consists of the Y-predictable functions (, ê , ĉ ) : [0, ] ×  →  × . The payment and recommendations in the contract depend on the whole past history of the observation and on the present time , but not on the future. Given a contract, the agent then makes his own choices of consumption and effort. Under full information, all of the agent's actions are observed by the principal and the agent has no choices to make; under hidden savings the agent chooses effort and consumption; and under hidden actions the agent chooses effort. Thus the agent's set of admissible controls A consists of the F  -predictable functions (, ) : [0, ] ×  → .
In accordance with the maximum principle results, we suppose that the set  can be written as a countable union of compact sets. If the agent accepts the contract at time 0 and always chooses the recommended actions (ê, ĉ) = (, ) during the contract period, the contract (, ê, ĉ) is called implementable. Under full information, the contract is implementable if the participation constraint holds, while under hidden actions with ĉ = , those contracts satisfying ê =  are implementable.

A Change of Variables.
Under hidden actions or hidden savings, in order to derive the conditions of incentive compatibility, we must consider the agent's decision problem when he faces a given contract. Generally, for any time , the agent's payment   = (, ) is a function of the entire past history of output. Because of this history dependence, which would make the entire past history  a state variable, we cannot solve the agent's problem directly. As in Bismut [10], for convenience, the density of the output process is taken as the key state variable rather than the output process itself. In particular, let  0  be a Brownian motion on  representing the reference distribution of output, under which output is a martingale. The distribution of output changes with the agent's effort choices; thus a different probability measure over output corresponds to a different effort choice by the agent, and we take the relative density Γ  as the key state variable. We give the details of the change of measure in Appendix A.1, where the density evolves as follows: with Γ 0 = 1.
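In the standard weak formulation this density is a Girsanov exponential. The sketch below uses assumed notation (f(a) for the output drift induced by effort, σ for the volatility, B⁰ for the reference Brownian motion under the measure P⁰), since the paper's display is not recoverable:

```latex
% Under the reference measure P^0, B^0 is a Brownian motion and output is the
% martingale dY_t = \sigma\, dB^0_t. An effort strategy a changes the measure
% through the relative density
d\Gamma_t \;=\; \Gamma_t\, \frac{f(a_t)}{\sigma}\, dB^0_t, \qquad \Gamma_0 = 1,
% where \Gamma_T is the Radon-Nikodym derivative dP^a/dP^0 on \mathcal{F}_T.
```

Each effort strategy thus induces its own measure over output paths, which is the sense in which the density, rather than output itself, carries the relevant state.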
In the environment of hidden states, the covariance between the unobservable and observable states plays a key role in the model. For convenience, we use the density-weighted wealth   = Γ    as the relevant unobservable state. After simple calculations from (4) and (12), we obtain its evolution: with  0 =  0 . Through the change of variables from (  ,   ) to (Γ  ,   ), the state evolution is represented by a stochastic differential equation (SDE) with random coefficients. The coefficients of the transformed state evolution depend only on , which is a fixed, but random, element of the probability space, rather than on a state depending directly on the entire past history. That is, we cannot take   directly as the state variable when analyzing the agent's problem. Instead, we fix an output path  and let the agent's effort choices affect its likelihood. Changing variables in this way is useful for dealing with the history, as the relevant state variable depending on the whole past history of output can be replaced with the current density Γ  .

The Full Information Problem
We first discuss the full information problem. As the agent's effort and consumption can be observed by the principal, the full information problem is relatively easy to analyze: the principal specifies the values of payment and effort directly in the contract. At time 0, the only requirement in designing the contract is to ensure that the agent participates, after which both the principal and the agent comply with the contract. We assume that the agent's outside reservation utility is  0 , and thus the participation constraint on the contract is (, ) ≥  0 .
As the participation constraint need only be satisfied at time 0, we could obtain the optimal contract by standard Lagrangian methods. However, as in Spear and Srivastava [28], we know that the contract should track the agent's promised utility in dynamic moral hazard settings. It is thus natural to introduce promised utility, which plays a key role in the private information cases, as a way of imposing the participation constraint. Accordingly, we define the agent's promised utility   as the expected discounted utility remaining in the contract from time  forward: Using the martingale representation theorem, we obtain its evolution as follows: where   represents the sensitivity of the agent's promised utility to the external shocks, which is crucial for providing incentives under moral hazard. Since there is no initial condition but a specified terminal condition, the promised utility follows a backward stochastic differential equation (BSDE). In the full information case,   can be chosen freely by the principal in the contract, as long as it is a solution of (15), which implies that the participation constraint  0 ≥ V 0 will be satisfied.
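Since the displayed equations are not recoverable, the following sketch restates the promised-utility definition and its martingale-representation dynamics in generic assumed notation (ρ discount rate, u flow utility, U terminal utility, γₜ the sensitivity process):

```latex
% Promised utility: expected discounted utility remaining from time t forward,
V_t \;=\; \mathbb{E}\!\left[\int_t^T e^{-\rho(s-t)}\, u(c_s, a_s)\, ds
      \;+\; e^{-\rho(T-t)}\, U_T \,\Big|\, \mathcal{F}_t\right].
% The martingale representation theorem then yields the BSDE
dV_t \;=\; \bigl(\rho V_t - u(c_t, a_t)\bigr)\, dt \;+\; \gamma_t\, dB_t,
\qquad V_T = U_T ,
% where \gamma_t is the sensitivity of promised utility to the shocks.
```

The drift term says that promised utility grows at the discount rate net of the utility flow actually delivered, with the terminal condition pinning the process down backward in time.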
Under full information, in order to maximize the principal's utility (6), the optimal contract problem is to choose (  ,   ,   ,   ) for all  and the terminal payment   subject to the evolution of assets (3) and promised utility (15) and to the participation constraint. We denote the principal's value function for maximizing utility (6) by (, , ), which captures the principal's expected discounted value when the principal's assets are   =  and the agent's promised utility is   = . Using the principle of optimality in dynamic programming, we obtain the HJB equation for 0 <  < : with the terminal condition (,   ,   ) = (  −   ) and   =   (  ). After calculations, we derive the following first-order conditions for (, , , ): Combining the conditions for (, ), we have Thus the optimal effort required by the principal in the contract is a constant, equal to the marginal productivity. This result is the standard efficiency condition that the agent's marginal rate of substitution between consumption and effort equals the marginal productivity of effort. Furthermore, from the expression for effort, effort increases with  1 and  2 .
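To illustrate the efficiency condition, the following sketch adopts one consistent reading of the garbled displays (the functional forms here are assumptions): output drift f(a) = θ₀ + θ₁a + θ₂a², so marginal productivity f′(a) = θ₁ + 2θ₂a is linear in effort, and flow utility −exp(−γ(c − a²/2)), so the marginal rate of substitution between consumption and effort is a:

```latex
% Efficiency condition "MRS = marginal productivity" under the assumed forms:
a \;=\; f'(a) \;=\; \theta_1 + 2\theta_2 a
\quad\Longrightarrow\quad
a^{*} \;=\; \frac{\theta_1}{1 - 2\theta_2}\, .
```

The resulting effort is a constant and is increasing in both θ₁ and θ₂, consistent with the statement above; this reading also matches the full-information constant recovered in the hidden action case when the output volatility vanishes.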
Thanks to the exponential preferences and the linear evolution, we obtain an explicit solution for the optimal contract. Using the terminal condition and the specification of terminal preferences (10), we have where  = (1/)exp(( − )/). Therefore we obtain The solution of (16) for any time  can be written as follows: for a function  0 (). Substituting (20) into (16), we obtain the following ODE: where and  0 () =  2 . The solution of (21) is Therefore, the optimal policies are The principal and the agent share the production risk, as both of them are risk averse. Instead of depending directly on output, the agent's consumption is linear in the logarithm of the utility process and changes with the values of  1 and  2 . With different parameters  1 and  2 , the agent's optimal effort is different. The principal's consumption is proportional to the current output and the logarithm of the agent's utility process. Note that only the principal's consumption involves a time-dependent function, which captures the finite horizon feature of the problem.
In order to facilitate comparison among the different cases, we extend the contract from a finite to an infinite horizon. Under full information, we solve the infinite horizon problem directly. In the private information models, however, our results apply only to the finite horizon, so we take the limit of these solutions as  → ∞. Through calculations, we get lim →∞  0 () = exp(/), and one can verify that (, ) = (exp(/)/)exp(−) is the solution of the infinite horizon problem. The state variables for the infinite horizon evolve in the following form: The process of the principal's assets follows an arithmetic Brownian motion with constant drift and volatility, while the agent's promised utility follows a geometric Brownian motion, the same result as in Williams [6]. The expected growth rate of the agent's promised utility equals the difference between the discount rate  and the rate of return .
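The claim that promised utility grows on average at the difference between the discount rate and the rate of return can be checked by Monte Carlo. The sketch below assumes the geometric Brownian motion dW = (ρ − r)W dt + φW dB (symbols and parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Monte Carlo check that a geometric Brownian motion
#   dW_t = (rho - r) W_t dt + phi W_t dB_t
# has expected growth rate rho - r, i.e. E[W_T] = W_0 exp((rho - r) T).
rng = np.random.default_rng(0)
rho, r, phi = 0.06, 0.04, 0.3   # illustrative discount rate, return, volatility
W0, T, n = 1.0, 5.0, 200_000

Z = rng.standard_normal(n)
# Exact lognormal solution of the GBM SDE at time T
WT = W0 * np.exp((rho - r - 0.5 * phi**2) * T + phi * np.sqrt(T) * Z)

mc_mean = WT.mean()
analytic = W0 * np.exp((rho - r) * T)
print(mc_mean, analytic)   # the two agree up to Monte Carlo error
```

The same exercise with a larger drift illustrates the comparison in Section 6, where the moral hazard contract features a faster expected growth rate of promised utility.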

The Hidden Action Case
We consider the private information model where the agent's effort   and the shocks   are unobservable but the assets   can be observed by the principal. We assume that the principal can observe the agent's consumption and wealth, and thus we set   =   and   ≡ 0. In order to make the agent put forth the desired amount of effort, the principal must provide incentives. When the agent faces a given history-dependent contract, we derive the agent's optimality conditions by solving the dynamic moral hazard problem, and then we show that the agent's first-order conditions are sufficient to ensure implementability. Consequently, we obtain an explicit solution for the optimal contract.

The Agent's Problem.
It is crucial to see what effort level the agent would choose when the principal designs an incentive compatible contract. As discussed in Section 2.4, the change of variables that removes the dependence on the history of output plays a key role, so we take the density process (12) as the relevant state variable. Using   = (, ), the agent's preferences can be written as follows: The first equality expresses the expectation of discounted utility under the measure   over output induced by the agent's effort policy , as discussed in Appendix A.1. Using the density process defined above, we obtain the second equality, in which the history-dependent variables inherited from the contract are replaced with the relevant state variables. The agent's problem is to solve subject to (12) for a given .
After the change of variables, the agent's problem becomes a control problem with random coefficients. Following the method of Bismut [9], we apply the stochastic maximum principle to characterize the agent's optimality conditions, as in Williams [5]. As with the deterministic Pontryagin maximum principle, applying the stochastic maximum principle requires defining a Hamiltonian, expressing the optimality conditions by differentiating the Hamiltonian, and deriving adjoint or "costate" variables. Since the state variable is stochastic, the adjoint variable is a pair of processes, one multiplying the drift of the state and the other the diffusion. The pair of adjoint processes solves a BSDE. With state Γ  and adjoint (  ,   ), we define the agent's Hamiltonian H as Since Γ has no drift, the Hamiltonian H does not involve the level of the adjoint . Furthermore, since Γ  > 0, instead of the Hamiltonian H we state the conditions in terms of the reduced Hamiltonian .
As in deterministic optimal control theory, the optimal control maximizes the Hamiltonian, and the derivatives of the Hamiltonian govern the evolution of the adjoint or costate variables. In particular, the drift of the costate consists of −/Γ and a term reflecting the discounting. Adding a diffusion term, the evolution of the adjoint variable can be written as Here, through the change of measure, we obtain the second equality, which shows that the costate associated with the relative density Γ  is the promised utility   in (15). In the hidden action case, using the relative density Γ  is not merely a convenience as in the full information case. Here it becomes an element of the agent's optimality conditions, capturing the change in the shadow value of the likelihood of different output processes.
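The adjoint dynamics just described have the following generic current-value shape; the notation below (costate pair (q, Q), discount rate ρ, terminal objective Φ) is an assumption standing in for the paper's unrecoverable display:

```latex
% Current-value adjoint BSDE for the state \Gamma_t with Hamiltonian H:
dq_t \;=\; \Bigl(\rho\, q_t \;-\; \frac{\partial H}{\partial \Gamma}\Bigr)\, dt
       \;+\; Q_t\, dB_t,
\qquad
q_T \;=\; \frac{\partial \Phi}{\partial \Gamma}(\Gamma_T),
% so the drift combines -\partial H/\partial\Gamma with the discounting term
% \rho q_t, and Q_t is the diffusion loading of the costate.
```

Solving this BSDE forward from its terminal condition is what identifies the costate of the density with the agent's promised utility.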
In the following, we require a process Then we give necessary conditions for optimality. The proofs of all results are given in Appendix A.2.

Proposition 1. Let ( * , Γ * ) be an optimal control-state pair. Then there exists an F  -adapted process (  ,   ) in  2 that satisfies (29)

Although the results are similar to those of Williams [5], our reduced Hamiltonian  differs from that of Williams [6]. Since the result gives only necessary conditions for the agent's problem, the set of contracts characterized by the agent's first-order conditions alone may be larger than the set of implementable contracts. We show in the next subsection that the first-order approach suffices to define the optimal contract.

Implementable Contracts.
In order to induce the agent to provide an interior target effort ê , we build in the incentive constraints by using the first-order condition (31), which holds with equality at ê , giving (32). Using (32), we define the target volatility γ , which can be expressed in terms of the target effort and the consumption. Since γ > 0, promised utility rises after a positive shock, that is, a positive increment of the Brownian motion   .
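A sketch of how condition (32) pins down the target volatility, under the same assumed functional forms as before (output drift f(a) = θ₀ + θ₁a + θ₂a², flow utility u(c, a) = −exp(−γ(c − a²/2)), and a reduced Hamiltonian of the assumed form h(a) = u(c, a) + γₜ f(a)/σ; none of these forms is recoverable verbatim from the paper):

```latex
% First-order condition \partial h/\partial a = 0 at (\hat c, \hat e):
\hat{\gamma}_t
  \;=\; -\,\frac{\sigma\, u_a(\hat c, \hat e)}{f'(\hat e)}
  \;=\; \frac{\sigma\, \gamma\, \hat e\;
              e^{-\gamma(\hat c - \hat e^{2}/2)}}{\theta_1 + 2\theta_2 \hat e}
  \;>\; 0,
% using u_a(c,a) = -\gamma a\, e^{-\gamma(c - a^2/2)} and f'(a) = \theta_1 + 2\theta_2 a > 0.
```

Under these assumptions the target volatility is determined by the target effort and consumption alone, and its positivity delivers the property that promised utility rises with a positive shock.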
A contract is called locally incentive compatible if it satisfies the first-order condition (32). We now characterize the set of implementable contracts. When an agent faces a contract with consumption ĉ, target effort ê, and associated volatility γ , condition (32) ensures that the agent's optimal response is to provide the target effort ê, which implies  (  , ê , ĉ ,   ,   ) = max  ∈ (  ,   , ĉ ,   ,   ) . (33) In the hidden action case, the reduced Hamiltonian  is concave with respect to , so condition (32) is necessary and sufficient for local optimality of the contract. We call a contract promise-keeping if it implies a solution to (29). Since the terminal condition may not be satisfied, not all contracts satisfy this condition. As in the full information case, if this solution has  0 ≥  0 , we say that the contract satisfies the participation constraint. The set of implementable contracts can then be characterized by these conditions, as in the next result.

Proposition 2. A contract (ĉ, ê) ∈ 𝑆 is implementable in the hidden action case if and only if it satisfies the (i) participation, (ii) promise-keeping, and (iii) local incentive compatibility constraints.
The proof of Proposition 2 is in Appendix A.2. As discussed above, this establishes the validity of the first-order approach. Therefore, in the setting of this paper, the global optimality conditions are equivalent to the local first-order condition.

The Optimal Contract.
We now show how to obtain an explicit solution of the contract, which characterizes the principal's choice. As discussed above, we define the value function as (, , ). The contract offered by the principal must satisfy the participation, promise-keeping, and incentive constraints. Instead of choosing the volatility variable  freely, as in the full information case, here the volatility variable (, ) is a function of consumption and effort. The principal's HJB equation for 0 <  <  can be written down as before. Under the exponential-linear environment, with hidden actions the value function and the optimal policies in the contract have the same form as under full information, up to different constants. In particular, the value function takes the same form, for some function  1 () of . After some calculations, the optimal policies follow, where both  * and  are constants. Using the fact that ( ℎ (),  * ) =  and substituting (36) into the first-order conditions for (, ), we see that  * and  satisfy (38). Solving (38) yields (39). Consequently, we obtain an implicit expression for  by substituting (39) into the first equation in (38). Substituting the optimal policies into (34), we find that the function  1 () satisfies an ODE of the same form as in the full information case, where the terminal condition is  1 () =  2 . The solution of (40) follows. While the policy functions in the full information case and the hidden action case have the same form, their constants differ. The values of the constants can be obtained by a simple numerical computation, but explicit analytic expressions are not available. When  = 0, the policies in the hidden action case degenerate into the full information solution with  * =  1 /(1 − 2 2 ) and  = , which is natural: there is no private information when the output is not affected by shocks. For small , we obtain  * ≈  1 /(1 − 2 2 ) and  ≈ . Therefore we have  ℎ = − = 2 , which implies that the agent's utility in the case of moral hazard is more responsive to incoming information than that under full information.
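The constants in the hidden-action policies have no closed form, but as noted they are easy to compute numerically. The sketch below illustrates one workable scheme, a damped fixed-point iteration on two coupled conditions; the function `foc_map`, the parameter names `THETA1`, `THETA2`, `GAMMA`, `SIGMA`, and the functional forms are all hypothetical placeholders, not the model's actual equations (38).

```python
# Damped fixed-point iteration for a pair of coupled first-order conditions.
# All parameter names and functional forms are hypothetical stand-ins for the
# model's equations (38); only the numerical scheme itself is the point.
THETA1, THETA2, GAMMA, SIGMA = 0.5, -0.1, 2.0, 0.15

def foc_map(e, rho):
    """Placeholder update mimicking two coupled conditions: effort depends on
    the effective return, and the effective return depends on effort."""
    e_new = THETA1 / (1.0 - 2.0 * THETA2) - GAMMA * SIGMA**2 * rho
    rho_new = 0.1 / (1.0 + GAMMA * SIGMA**2 * e)
    return e_new, rho_new

e, rho = 0.5, 0.1                  # initial guess
for _ in range(200):
    e_new, rho_new = foc_map(e, rho)
    e, rho = 0.5 * (e + e_new), 0.5 * (rho + rho_new)   # damping for stability
```

The damping halves each update, which keeps the iteration stable even when the raw map is not a contraction; for the weakly coupled placeholder equations above it converges to machine precision in well under 200 steps.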
To better understand the optimal contract, we expand  * and  when  2 is near zero. According to (38) and (39), we obtain first-order approximations. Substituting these into  ℎ (), we see that the information friction reduces the amount of effort provided by the agent but leaves consumption almost unchanged.
In the contract,  is the agent's effective rate of return, which can be regarded as an after-tax return on savings. From the approximation of , this return decreases as volatility grows. Furthermore, effort varies with the parameters in a simple way: a larger risk aversion parameter  or return parameter , or smaller parameters  2 and  1 , leads to larger reductions in effort. Below we show that the exact solutions of the model for specific parameters accord with these approximations. In sum, the information friction has little effect on consumption but leads to a reduction in effort.
As discussed in the full information case, these results apply only to finite horizons. We know that the limit of the finite horizon solution is well defined: as  → ∞, we have lim →∞  1 () = exp(/). In the infinite horizon limit, the evolution of the state variables can be written down accordingly. Compared with full information, the expected growth rate of promised utility under moral hazard is larger, since  <  for small  2 , and the volatility of promised utility is also larger, since  ℎ >   .
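The volatility comparison can be illustrated numerically. The Euler-Maruyama sketch below uses hypothetical drift `MU` and volatility levels `SIG_FULL` and `SIG_HIDDEN` (stand-ins chosen only so that the moral-hazard volatility is the larger one) and checks that a larger diffusion coefficient produces a wider terminal dispersion of log promised utility.

```python
import math
import random

random.seed(1)
MU, SIG_FULL, SIG_HIDDEN = 0.05, 0.10, 0.14   # hypothetical drift and volatilities
T, N_STEPS, N_PATHS = 5.0, 200, 2000
dt = T / N_STEPS

def log_dispersion(sigma):
    """Sample std dev of log(-v_T) when dv = mu*v dt + sigma*v dW, v_0 = -1
    (promised utility is negative under exponential preferences)."""
    draws = []
    for _ in range(N_PATHS):
        x = 0.0                                # x_t = log(-v_t)
        for _ in range(N_STEPS):
            x += (MU - 0.5 * sigma**2) * dt + sigma * random.gauss(0.0, math.sqrt(dt))
        draws.append(x)
    mean = sum(draws) / N_PATHS
    return math.sqrt(sum((d - mean) ** 2 for d in draws) / N_PATHS)

d_full, d_hidden = log_dispersion(SIG_FULL), log_dispersion(SIG_HIDDEN)
```

With these stand-in values the dispersions come out near sigma * sqrt(T), so the hidden-action path distribution is visibly wider, consistent with the larger volatility of promised utility under moral hazard.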

The Hidden Savings Case
We now consider the case in which the principal cannot monitor the account through which the agent is able to save and borrow.
As discussed above, under hidden actions the agent's wealth   satisfies (4), but the principal can no longer guarantee that the agent's consumption equals his payment. In order to eliminate savings and achieve the targets m ≡ 0 and ĉ = , the principal must design a contract that gives the agent no incentive to save. Similar to the hidden action case, we derive the agent's optimality conditions when he faces a given history-dependent contract. In the hidden saving case, however, the set of implementable contracts can no longer be characterized by the agent's optimality conditions. Using the necessary optimality conditions, we obtain a candidate optimal contract and then demonstrate that this contract is truly incentive compatible and implementable. The same approach is used in Farhi and Werning [29]. In general, designing the contract requires an additional endogenous state variable to capture the shadow value (in terms of the agent's marginal utility) of the hidden state. Below we show that this additional variable is redundant in our setting.

The Agent's Problem.
Similar to the hidden action case, we derive the agent's first-order optimality conditions. The agent chooses his effort as well as his consumption. Thus the problem is to solve sup  subject to (12) and (13) for a given . According to the stochastic maximum principle, the additional state variable requires an additional adjoint, denoted (  ,   ). On the basis of the states (Γ  ,   ) and the adjoints (  ,   ) and (  ,   ), we define the agent's Hamiltonian as H = Γ  with  (, , , , , , , , ), where we use (13). We obtain the evolution of the costates by differentiating the Hamiltonian. Therefore, the promised utility   with terminal condition   = (  ,   ) follows (15), and its sensitivity is   . The sensitivity of the costate variable   associated with the wealth   is   ; thus the evolution of the adjoint (  ,   ) can be written down accordingly. As in the hidden action case, (48) is derived by differentiating H with respect to  and changing the measure.
Using the stochastic maximum principle, we obtain necessary conditions for the agent's optimal choice.

Proposition 3. Let ( * ,  * , Γ * ,  * ) be an optimal control-state pair. Then there exist F  -adapted processes (  ,   ) and (  ,   ) in  2 that satisfy (15) and (48) with (, ) = ( * ,  * ), and for almost every  ∈ [0, ] the optimal control attains the maximum of the Hamiltonian. Assume moreover that  is convex; then, for all (, ) ∈ , a pair of optimal controls ( * ,  * ) satisfies the first-order conditions below.

We provide the proof of Proposition 3 in Appendix A.2. Differentiating the reduced Hamiltonian, we obtain the first-order conditions. In particular,   =   (  ,   ), the agent's marginal utility of consumption: the marginal value of an extra unit of wealth to the agent equals his marginal utility of consumption. The solution of (48) can be written in integral form, and by the law of iterated expectations, for  > , the standard consumption Euler equation holds. Thus the agent's optimality conditions can be expressed through the expectation of the marginal utility of consumption, and the volatility of marginal utility   is determined by the contract.
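The Euler equation step rests on the fact that the agent's (appropriately discounted) marginal utility process is a martingale: its conditional expectation at any later date equals its current value. The sketch below checks this for an exponential (Doleans-Dade) martingale with a hypothetical constant volatility `PHI`; it illustrates the law-of-iterated-expectations argument, not the model's actual marginal-utility process.

```python
import math
import random

random.seed(0)
PHI = 0.3                     # hypothetical constant volatility of marginal utility
T, N_STEPS, N_PATHS = 1.0, 50, 20000
dt = T / N_STEPS

# Simulate M_T = exp(-0.5*PHI^2*T + PHI*W_T); the martingale property says
# its mean should stay at M_0 = 1 regardless of the volatility level.
total = 0.0
for _ in range(N_PATHS):
    log_m = 0.0
    for _ in range(N_STEPS):
        log_m += -0.5 * PHI**2 * dt + PHI * random.gauss(0.0, math.sqrt(dt))
    total += math.exp(log_m)
sample_mean = total / N_PATHS
```

The sample mean stays close to 1 up to Monte Carlo error; the contract controls only the volatility term, which is exactly the sense in which the volatility of marginal utility is determined by the contract while its expectation is pinned down.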

Necessary Conditions for Implementability.
In order to achieve the target policy (ê, ĉ) and m ≡ 0, both the marginal utility   and the promised utility   are crucial for an implementable contract. In our setting, since preferences are exponential, the marginal utility is proportional to the utility itself, and the additional costate variable is redundant. We verify this below. At the terminal time , we have proportionality, so   is proportional to   . Combining   =   (  ,   ) = −(  ,   ) and (52), we obtain   = −  for all  ∈ [0, ]; hence the additional costate variable   is redundant and provides no information beyond the promised utility   . Substituting  = 0 and  = − into the first-order conditions, we obtain the target conditions. As in the hidden action case, a contract is called locally incentive compatible if these conditions are satisfied. In other words, the target control would satisfy the agent's optimality conditions if the agent had no wealth (  = 0). However, when the agent chooses a different amount of wealth and different actions, the contract may fail to be fully incentive compatible, so we must rule such cases out. As discussed in Williams [8], concavity of the Hamiltonian in (, , ) is a sufficient condition for implementability; this parallels the sufficiency condition for the maximum principle in Zhou [30]. Unfortunately, this concavity assumption is difficult to verify in our model. Instead of establishing implementability directly, we find a candidate optimal contract using the necessary conditions for implementability and then verify that the contract is indeed incentive compatible.
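The redundancy argument rests on a simple property of exponential (CARA) utility: marginal utility is a fixed multiple of utility itself, u'(c) = -gamma * u(c) at every consumption level, so a costate tracking marginal utility duplicates the one tracking promised utility. A quick numerical check of this identity (the coefficient `GAMMA` is a hypothetical value):

```python
import math

GAMMA = 2.0   # hypothetical risk-aversion coefficient

def u(c):
    """CARA utility, u(c) = -exp(-gamma*c)/gamma."""
    return -math.exp(-GAMMA * c) / GAMMA

def u_prime(c):
    """Marginal utility, u'(c) = exp(-gamma*c)."""
    return math.exp(-GAMMA * c)

# u'(c) + gamma*u(c) vanishes identically: the marginal-utility costate is a
# fixed multiple of utility and carries no extra information.
proportionality_gaps = [u_prime(c) + GAMMA * u(c) for c in (-1.0, 0.0, 0.5, 2.0)]
```

This is the feature of exponential preferences that collapses the extra costate; with power or log utility the ratio u'(c)/u(c) varies with c and the additional state variable would genuinely be needed.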
We give the proof of Proposition 4 in Appendix A.2.

The Optimal Contract.
where the terminal condition is (,   ,   ) = (  −   ) and   =   (  ). Thus the evolutions of the first-order conditions for (, ) follow. Ignoring different constants, the optimal policies and the value function have the same form as in the previous cases.
In fact, the value function takes the same form, with some function  2 (). The optimal policies follow, where ě is a constant. The agent's optimality conditions determine the policies  ℎ and  ℎ when  = ě . Substituting (59) into (57), we obtain an ODE for  2 (), where the terminal condition is  2 () =  2 . The solution of (61) follows, and, similar to the optimal effort choice  * () in the hidden action case, an analogous expression obtains. All of these results in the hidden saving case agree with those in the hidden action case if  = . When the agent has access to assets that cannot be monitored by the principal and that yield the risk-free return , it is impossible for the principal to induce an effective return  ̸ =  by distorting the allocation intertemporally. Thus hidden savings limit the principal's ability to provide intertemporal incentives.
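The reduction to an ODE with a terminal condition, solved in closed form, is easy to cross-check numerically. A minimal sketch, assuming a linear ODE of the stated type with hypothetical coefficient `K` and terminal value `F_T` (stand-ins for the paper's symbols): integrate backward from the terminal condition with explicit Euler and compare against the exponential closed form.

```python
import math

K, F_T, T = 0.4, 1.5, 2.0     # hypothetical coefficient, terminal value, horizon
N_STEPS = 100_000
dt = T / N_STEPS

# Integrate f'(t) = K * f(t) backward from the terminal condition f(T) = F_T.
f0 = F_T
for _ in range(N_STEPS):
    f0 -= K * f0 * dt          # explicit Euler step, moving from T toward 0

closed_form = F_T * math.exp(-K * T)   # exact solution evaluated at t = 0
```

The backward Euler sweep agrees with the closed form to well under 0.1% at this step size, which is the kind of check one would run before trusting the closed-form constants in the policy functions.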
In the infinite horizon limit, the evolutions of the state variables can be written down accordingly. These evolutions of output and promised utility under hidden savings coincide with those under hidden actions if  = . Since the principal cannot affect the agent's intertemporal incentives in the hidden saving case, the expected growth rate of promised utility under hidden savings is smaller than under hidden actions and equals that under full information.

Verifying Incentive Compatibility.
Using the agent's necessary optimality conditions, we obtained a candidate optimal contract, which need not be incentive compatible. To show that the contract is indeed implementable, we now solve the agent's problem explicitly when he faces this given contract.
Under the contract, the agent's payment   =  ℎ (  ) depends on the promised utility   . For the agent,   evolves with   = (  , 0), where the Brownian motion  ě  represents the principal's information set under the optimal contract,   is the actual effort provided by the agent, and ě is the optimal effort specified in the optimal contract. The expected growth rate of promised utility increases if   < ě , which can be interpreted as a negative shock. According to the terminal condition   = (  , 0), the terminal payment   is the inverse function of   . Without loss of generality, we assume that the agent's initial wealth is  0 = 0, since initial wealth is observable and can be taxed away by the principal. The evolution of wealth can be written down accordingly. As discussed above, we define the agent's value function as (, , ), and the HJB equation follows. We can verify that (, , ) =  exp(−) is the agent's value function for all . Substituting (, , ) into (68), we obtain the first-order conditions for (, ). Simple calculations show that these two first-order conditions give  = ě , which implies that the target effort level is achieved. The incentive problems induced by deviations in effort and hidden savings do not arise here, as effort is independent of . The optimality condition for  is  = ě . Combining (67) and (71) gives   = 0. Under the assumption  0 = 0, the agent always keeps his wealth at zero and consumes the optimal amount  =  ℎ () given by the contract. Therefore the candidate optimal contract is indeed implementable.
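The last step of the verification hinges on the wealth dynamics: with zero initial wealth and consumption equal to the contractual payment, savings remain at zero forever. A minimal sketch under hypothetical names (`R` for the risk-free return, `payment` for an arbitrary illustrative payment path):

```python
R, T, N_STEPS = 0.05, 10.0, 1000   # hypothetical risk-free return, horizon, steps
dt = T / N_STEPS

def payment(t):
    """Hypothetical payment path specified by the contract; any path works
    for this check, since only the difference y - c matters."""
    return 1.0 + 0.1 * t

m = 0.0                            # initial wealth is taxed away, m_0 = 0
for i in range(N_STEPS):
    t = i * dt
    c = payment(t)                 # the agent consumes exactly the payment
    m += (R * m + payment(t) - c) * dt   # dm = (r*m + y - c) dt stays at zero
```

Since the savings flow y - c vanishes at every date and wealth starts at zero, the accumulation term r*m never activates and wealth stays identically zero, which is the m ≡ 0 target of the contract.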

Comparing the Different Cases
Table 1 summarizes the optimal policies of the contracts under full information, hidden actions, and hidden savings.
The policy functions are similar in form across the cases. In each case, the agent's effort is constant, the agent's consumption is proportional to log(−), and the principal's dividend is proportional to log(−) and the current assets . Both consumption functions are also negatively related to the agent's effective rate of return on assets. The effective rate of return is  under full information and hidden savings, and  under hidden actions. Compared with full information, the main effect of hidden actions is the reduction of the agent's effort. The effective rate of return falls to  with  < , which at least partially offsets the effect of the reduction in effort, so there is almost no effect on consumption. The agent's effective rate of return on savings drives the main difference between the hidden action and hidden saving cases: the optimal policies in the two cases would coincide if the effective return on savings were . Under hidden savings, however, the principal's ability to provide intertemporal incentives is limited. From the derivative of effort with respect to , effort is decreasing in  if  <  < 4.
As discussed in the hidden action case, for small shocks  is only slightly smaller than . Since both  * () and  ℎ decrease as  grows, and  < , the agent provides less effort and consumes less in the hidden saving case than in the hidden action case.
For the full information, hidden action, and hidden saving cases, we set  0 = 0,  1 = 0.5,  2 = −0.1,  = 0.15,  = 0.1,  = 2,  = −1; the resulting effort and consumption functions are shown in Figure 1. The left panel plots the effort functions against , while the right panel plots the consumption functions against . Since the information friction vanishes as  → 0, the results of all three cases coincide in that limit. Compared with full information, effort in both the hidden action and hidden saving cases falls more rapidly as  grows. Moreover, for small , consumption under hidden savings falls sharply while consumption under hidden actions is relatively unaffected. Both effort and consumption fall further under hidden savings than under hidden actions. As we have already seen, all of these efforts are monotone in .
Since agents typically differ in their ability to manage risky assets, it is meaningful for the principal to consider the effect of marginal productivity. In our model, marginal productivity is described by the parameters  1 and  2 . Figure 2 shows how the effort functions in the full information, hidden action, and hidden saving cases change with  1 and  2 . In the left panel, we set  2 = −0.1,  = 1, and plot effort versus  1 . In the right panel, we set  1 = 0.5,  = 1, and plot effort versus  2 . The agent's effort increases with both  1 and  2 in all cases. Effort under full information is linear in  1 , while under hidden actions and hidden savings it is increasing in  1 but not linear. Since the size of the effort gap between the cases is driven by the shock , the difference between any two cases is constant in the right panel and is not affected by  2 . Consistent with the previous results, effort under hidden actions is larger than under hidden savings but smaller than under full information. If the shock  = 0, the efforts for the parameterizations in Figure 2 would coincide.
Finally, to measure the cost of the information frictions, we examine the reduction in the principal's consumption. In all three information structures, the dividends are affected by functions of time . That is, for each fixed level of assets  and promised utility , at each date the difference in the principal's consumption between two informational assumptions is a constant amount that depends on the information structure. In the infinite horizon limit, the reductions in the principal's consumption in the hidden action and hidden saving cases, relative to the full information case, are shown in Figure 3. With  = 0 and  = −1, the largest cost of the information frictions is approximately 2.8% under hidden savings and 2.4% under hidden actions, a relatively low rate. Since the agent's marginal productivity falls, and with it the agent's effort, the cost in our setting is smaller than that in Williams [30]. With lower levels of promised utility  or greater output , the proportional reduction in the principal's dividend is of course even lower.

Conclusion
In this paper, we have shown how to find explicit solutions for optimal contracts under full information, hidden actions, and hidden savings in a dynamic principal-agent model. The continuous time setting allows us to apply several powerful results from stochastic control. In addition, under the assumptions of exponential utility and linear production, we solve the optimal contracts explicitly and give their expressions.
We show that in a fully dynamic setting with exponential utility the optimal contract is linear. The optimal effort in every case is a constant that decreases as marginal productivity declines. The agent's payment, or consumption, is proportional to the logarithm of promised utility and to the effective rate of return under the optimal contract. The principal's consumption is proportional to the assets , the logarithm of promised utility, and time-dependent functions. Moreover, we show that the main effect of hidden actions is a reduction in effort, with a smaller effect on consumption, since the agent's implicit rate of return is only slightly affected in this case. Under hidden savings, the reductions in both effort and consumption are larger than under hidden actions.
In real economic activities, it is reasonable and practical to describe economic features with nonlinear relations. A natural extension of the current model is to consider nonlinear productivity; studying this nonlinear relation is left for future work.

Figure 1 :
Figure 1: The agent's effort and consumption policies for different shock levels .

Figure 2 :
Figure 2: The agent's optimal effort for different  1 and  2 .

Figure 3 :
Figure 3: Reduction in the principal's dividend relative to the full information case for different shock levels .

Table 1 :
Comparison of the policies for the optimal contracts under full information, hidden actions, and hidden savings.