Question
Suppose $X \sim \mathcal{N}(\mu_x, \sigma_x^2)$ and $Y \sim \mathcal{N}(\mu_y, \sigma_y^2)$ are jointly Gaussian with correlation $\rho$. Compute $$ \mathbb{E}[e^Y \mid X]. $$
Solution

The key step is to identify the conditional distribution of $Y$ given $X$.
For jointly Gaussian $(X,Y)$, the conditional distribution $Y \mid X$ is Gaussian.

We first compute the conditional mean by linear regression of $Y$ on $X$: $$ \mathbb{E}[Y \mid X] = aX + b, $$ where $$ a = \frac{\mathrm{Cov}(X,Y)}{\mathrm{Var}(X)}, \qquad b = \mathbb{E}[Y] - a\,\mathbb{E}[X]. $$ Since $\mathrm{Cov}(X,Y) = \rho \sigma_x \sigma_y$ and $\mathrm{Var}(X) = \sigma_x^2$, we obtain $$ \mathbb{E}[Y \mid X] = \frac{\rho \sigma_x \sigma_y}{\sigma_x^2} X + \mu_y - \frac{\rho \sigma_x \sigma_y}{\sigma_x^2} \mu_x = \mu_y + \rho \frac{\sigma_y}{\sigma_x}(X - \mu_x). $$

Next we use the law of total variance: $$ \mathrm{Var}(Y) = \mathbb{E}[\mathrm{Var}(Y \mid X)] + \mathrm{Var}(\mathbb{E}[Y \mid X]). $$

Since $\mathbb{E}[Y \mid X]$ is linear in $X$, the second term is $$ \mathrm{Var}(\mathbb{E}[Y \mid X]) = \rho^2 \frac{\sigma_y^2}{\sigma_x^2} \mathrm{Var}(X) = \rho^2 \sigma_y^2. $$

For jointly Gaussian $(X,Y)$, the conditional variance $\mathrm{Var}(Y \mid X)$ does not depend on $X$, so $\mathbb{E}[\mathrm{Var}(Y \mid X)] = \mathrm{Var}(Y \mid X)$. Thus $$ \sigma_y^2 = \mathrm{Var}(Y) = \mathrm{Var}(Y \mid X) + \rho^2 \sigma_y^2, $$ so $$ \mathrm{Var}(Y \mid X) = (1 - \rho^2)\sigma_y^2. $$

Hence $$ Y \mid X = x \sim \mathcal{N}\Big( \mu_y + \rho \frac{\sigma_y}{\sigma_x}(x - \mu_x), (1 - \rho^2)\sigma_y^2 \Big). $$
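This conditional distribution can be checked numerically. The sketch below, with hypothetical parameter values chosen only for illustration, draws jointly Gaussian samples via the standard Cholesky construction and conditions on $X$ lying in a thin slab around a fixed $x_0$:

```python
import numpy as np

# Monte Carlo sanity check of the conditional distribution Y | X = x.
# Parameter values are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)
mu_x, mu_y, sigma_x, sigma_y, rho = 1.0, -0.5, 2.0, 1.5, 0.6

# Cholesky construction of a jointly Gaussian pair with correlation rho:
# X = mu_x + sigma_x*Z1,  Y = mu_y + sigma_y*(rho*Z1 + sqrt(1-rho^2)*Z2).
n = 2_000_000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x = mu_x + sigma_x * z1
y = mu_y + sigma_y * (rho * z1 + np.sqrt(1 - rho**2) * z2)

# A thin slab around x0 stands in for the event {X = x0}.
x0 = 2.0
mask = np.abs(x - x0) < 0.02
cond_mean = y[mask].mean()
cond_var = y[mask].var()

theory_mean = mu_y + rho * (sigma_y / sigma_x) * (x0 - mu_x)
theory_var = (1 - rho**2) * sigma_y**2
print(cond_mean, theory_mean)  # empirical vs. derived conditional mean
print(cond_var, theory_var)    # empirical vs. derived conditional variance
```

With a couple of million samples the slab contains enough points for the empirical mean and variance to match the derived values to about two decimal places.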

Now use the moment generating function of a Gaussian.
If $Z \sim \mathcal{N}(\mu, \sigma^2)$, then $$ \mathbb{E}[e^{tZ}] = \exp\Big(\mu t + \tfrac12 \sigma^2 t^2\Big). $$
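The MGF identity is easy to verify by simulation; here is a minimal check with hypothetical values of $\mu$, $\sigma$, and $t$:

```python
import numpy as np

# Numerical check of the Gaussian MGF: E[e^{tZ}] = exp(mu*t + sigma^2*t^2/2).
# mu, sigma, t are hypothetical values chosen for illustration.
rng = np.random.default_rng(1)
mu, sigma, t = 0.3, 1.2, 0.8
z = rng.normal(mu, sigma, size=5_000_000)

empirical = np.exp(t * z).mean()
theory = np.exp(mu * t + 0.5 * sigma**2 * t**2)
print(empirical, theory)  # should agree to ~2-3 decimal places
```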

Conditionally on $X$, we therefore have $$ \mathbb{E}[e^{tY} \mid X] = \exp\Big( \mu_{Y \mid X} t + \tfrac12 \sigma_{Y \mid X}^2 t^2 \Big), $$ where $$ \mu_{Y \mid X} = \mu_y + \rho \frac{\sigma_y}{\sigma_x}(X - \mu_x), \qquad \sigma_{Y \mid X}^2 = (1 - \rho^2)\sigma_y^2. $$

Setting $t = 1$ gives $$ \mathbb{E}[e^{Y} \mid X] = \exp\Big( \mu_y + \rho \frac{\sigma_y}{\sigma_x}(X - \mu_x) + \tfrac12 (1 - \rho^2)\sigma_y^2 \Big). $$
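As a sanity check, the tower property gives $\mathbb{E}\big[\mathbb{E}[e^Y \mid X]\big] = \mathbb{E}[e^Y] = e^{\mu_y + \sigma_y^2/2}$, so averaging the derived formula over $X$ must recover the unconditional lognormal mean. A Monte Carlo sketch of this consistency check (hypothetical parameters as before):

```python
import numpy as np

# Tower-property check: E[ E[e^Y | X] ] = E[e^Y] = exp(mu_y + sigma_y^2/2).
# Parameter values are hypothetical, chosen only for illustration.
rng = np.random.default_rng(2)
mu_x, mu_y, sigma_x, sigma_y, rho = 1.0, -0.5, 2.0, 1.5, 0.6
n = 2_000_000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x = mu_x + sigma_x * z1
y = mu_y + sigma_y * (rho * z1 + np.sqrt(1 - rho**2) * z2)

# The derived formula for E[e^Y | X], evaluated at the sampled X values.
cond_formula = np.exp(mu_y + rho * (sigma_y / sigma_x) * (x - mu_x)
                      + 0.5 * (1 - rho**2) * sigma_y**2)

lhs = cond_formula.mean()              # E over X of the derived formula
rhs = np.exp(y).mean()                 # direct Monte Carlo estimate of E[e^Y]
closed_form = np.exp(mu_y + 0.5 * sigma_y**2)
print(lhs, rhs, closed_form)           # all three should agree closely
```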

Remark.
The linear regression representation $\mathbb{E}[Y \mid X] = aX + b$ comes from the fact that $\mathbb{E}[Y \mid X]$ is the minimizer of $$ \min_f \; \mathbb{E}\big[(Y - f(X))^2\big], $$ and, for jointly Gaussian $(X,Y)$, the optimal predictor is linear in $X$.
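This remark can be illustrated directly: ordinary least squares on jointly Gaussian samples recovers the coefficients $a = \rho\,\sigma_y/\sigma_x$ and $b = \mu_y - a\mu_x$. A short sketch, again with hypothetical parameter values:

```python
import numpy as np

# Least squares on jointly Gaussian data recovers E[Y|X] = aX + b with
# a = rho*sigma_y/sigma_x and b = mu_y - a*mu_x.  Parameters are hypothetical.
rng = np.random.default_rng(3)
mu_x, mu_y, sigma_x, sigma_y, rho = 1.0, -0.5, 2.0, 1.5, 0.6
n = 1_000_000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x = mu_x + sigma_x * z1
y = mu_y + sigma_y * (rho * z1 + np.sqrt(1 - rho**2) * z2)

a_hat, b_hat = np.polyfit(x, y, 1)  # least-squares line y ~ a_hat*x + b_hat
a = rho * sigma_y / sigma_x         # theoretical slope: 0.45
b = mu_y - a * mu_x                 # theoretical intercept: -0.95
print(a_hat, a)
print(b_hat, b)
```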