
Commit 85eeec0

Jonathan Hseu authored and tensorflower-gardener committed
Automated rollback of change 139400135
Change: 139632235
1 parent 068e239 commit 85eeec0

160 files changed

Lines changed: 1599 additions & 1603 deletions


tensorflow/contrib/bayesflow/python/ops/entropy.py

Lines changed: 21 additions & 21 deletions
@@ -136,15 +136,15 @@ def elbo_ratio(log_p,
         <= Log[p(x)]
   ```
 
-  User supplies either `Output` of samples `z`, or number of samples to draw `n`
+  User supplies either `Tensor` of samples `z`, or number of samples to draw `n`
 
   Args:
-    log_p: Callable mapping samples from `q` to `Output`s with
+    log_p: Callable mapping samples from `q` to `Tensors` with
       shape broadcastable to `q.batch_shape`.
       For example, `log_p` works "just like" `q.log_prob`.
     q: `tf.contrib.distributions.Distribution`.
-    z: `Output` of samples from `q`, produced by `q.sample_n`.
-    n: Integer `Output`. Number of samples to generate if `z` is not provided.
+    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     form: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
       or `ELBOForms.sample` (sample estimate of entropy), or `ELBOForms.default`
@@ -153,7 +153,7 @@ def elbo_ratio(log_p,
     name: A name to give this `Op`.
 
   Returns:
-    Scalar `Output` holding sample mean KL divergence. `shape` is the batch
+    Scalar `Tensor` holding sample mean KL divergence. `shape` is the batch
     shape of `q`, and `dtype` is the same as `q`.
 
   Raises:
@@ -189,12 +189,12 @@ def entropy_shannon(p,
       = Entropy[p]
   ```
 
-  User supplies either `Output` of samples `z`, or number of samples to draw `n`
+  User supplies either `Tensor` of samples `z`, or number of samples to draw `n`
 
   Args:
     p: `tf.contrib.distributions.Distribution`
-    z: `Output` of samples from `p`, produced by `p.sample_n(n)` for some `n`.
-    n: Integer `Output`. Number of samples to generate if `z` is not provided.
+    z: `Tensor` of samples from `p`, produced by `p.sample_n(n)` for some `n`.
+    n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     form: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
       or `ELBOForms.sample` (sample estimate of entropy), or `ELBOForms.default`
@@ -203,7 +203,7 @@ def entropy_shannon(p,
     name: A name to give this `Op`.
 
   Returns:
-    An `Output` with same `dtype` as `p`, and shape equal to `p.batch_shape`.
+    A `Tensor` with same `dtype` as `p`, and shape equal to `p.batch_shape`.
 
   Raises:
     ValueError: If `form` not handled by this function.
@@ -316,24 +316,24 @@ def renyi_ratio(log_p, q, alpha, z=None, n=None, seed=None, name='renyi_ratio'):
 
   #### Call signature
 
-  User supplies either `Output` of samples `z`, or number of samples to draw `n`
+  User supplies either `Tensor` of samples `z`, or number of samples to draw `n`
 
   Args:
-    log_p: Callable mapping samples from `q` to `Output`s with
+    log_p: Callable mapping samples from `q` to `Tensors` with
       shape broadcastable to `q.batch_shape`.
       For example, `log_p` works "just like" `q.log_prob`.
     q: `tf.contrib.distributions.Distribution`.
      `float64` `dtype` recommended.
      `log_p` and `q` should be supported on the same set.
-    alpha: `Output` with shape `q.batch_shape` and values not equal to 1.
-    z: `Output` of samples from `q`, produced by `q.sample_n`.
-    n: Integer `Output`. The number of samples to use if `z` is not provided.
+    alpha: `Tensor` with shape `q.batch_shape` and values not equal to 1.
+    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    n: Integer `Tensor`. The number of samples to use if `z` is not provided.
       Note that this can be highly biased for small `n`, see docstring.
     seed: Python integer to seed the random number generator.
     name: A name to give this `Op`.
 
   Returns:
-    renyi_result: The scaled log of sample mean. `Output` with `shape` equal
+    renyi_result: The scaled log of sample mean. `Tensor` with `shape` equal
     to batch shape of `q`, and `dtype` = `q.dtype`.
   """
   with ops.name_scope(name, values=[alpha, n, z]):
@@ -362,7 +362,7 @@ def renyi_alpha(step,
                 alpha_min,
                 alpha_max=0.99999,
                 name='renyi_alpha'):
-  r"""Exponentially decaying `Output` appropriate for Renyi ratios.
+  r"""Exponentially decaying `Tensor` appropriate for Renyi ratios.
 
   When minimizing the Renyi divergence for `0 <= alpha < 1` (or maximizing the
   Renyi equivalent of elbo) in high dimensions, it is not uncommon to experience
@@ -382,17 +382,17 @@ def renyi_alpha(step,
   ```
 
   Args:
-    step: Non-negative scalar `Output`. Typically the global step or an
+    step: Non-negative scalar `Tensor`. Typically the global step or an
       offset version thereof.
-    decay_time: Positive scalar `Output`.
-    alpha_min: `float` or `double` `Output`.
+    decay_time: Positive scalar `Tensor`.
+    alpha_min: `float` or `double` `Tensor`.
       The minimal, final value of `alpha`, achieved when `step >= decay_time`
-    alpha_max: `Output` of same `dtype` as `alpha_min`.
+    alpha_max: `Tensor` of same `dtype` as `alpha_min`.
       The maximal, beginning value of `alpha`, achieved when `step == 0`
     name: A name to give this `Op`.
 
   Returns:
-    alpha: An `Output` of same `dtype` as `alpha_min`.
+    alpha: A `Tensor` of same `dtype` as `alpha_min`.
   """
   with ops.name_scope(name, values=[step, decay_time, alpha_min, alpha_max]):
     alpha_min = ops.convert_to_tensor(alpha_min, name='alpha_min')
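The `elbo_ratio` docstring above describes a sample-mean estimate of `E_q[ Log[p(z) / q(z)] ]`, which in expectation equals `-KL[q || p]` when `log_p` is a normalized density. A NumPy sketch of that estimator with Gaussian `p` and `q` (an illustration of the math only, not the contrib implementation):

```python
import numpy as np

def gaussian_log_pdf(z, mu, sigma):
    """Log density of N(mu, sigma^2), elementwise."""
    return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

def elbo_ratio_estimate(log_p, log_q, z):
    """Sample mean of log_p(z) - log_q(z) over samples z drawn from q."""
    return np.mean(log_p(z) - log_q(z))

rng = np.random.default_rng(0)
z = rng.normal(loc=0.0, scale=1.0, size=10_000)   # samples from q = N(0, 1)

log_q = lambda t: gaussian_log_pdf(t, 0.0, 1.0)
log_p = lambda t: gaussian_log_pdf(t, 0.5, 1.0)   # p = N(0.5, 1)

est = elbo_ratio_estimate(log_p, log_q, z)
# Estimates -KL[N(0,1) || N(0.5,1)] = -(0.5**2)/2 = -0.125, so est is negative.
```

When `z` is omitted in the contrib API, `n` samples are drawn internally instead; the sketch above corresponds to the caller-supplied-`z` path.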

tensorflow/contrib/bayesflow/python/ops/monte_carlo.py

Lines changed: 19 additions & 19 deletions
@@ -105,26 +105,26 @@ def expectation_importance_sampler(f,
   If `f >= 0`, it is up to 2x more efficient to exponentiate the result of
   `expectation_importance_sampler_logspace` applied to `Log[f]`.
 
-  User supplies either `Output` of samples `z`, or number of samples to draw `n`
+  User supplies either `Tensor` of samples `z`, or number of samples to draw `n`
 
   Args:
-    f: Callable mapping samples from `sampling_dist_q` to `Output`s with shape
+    f: Callable mapping samples from `sampling_dist_q` to `Tensors` with shape
       broadcastable to `q.batch_shape`.
       For example, `f` works "just like" `q.log_prob`.
-    log_p: Callable mapping samples from `sampling_dist_q` to `Output`s with
+    log_p: Callable mapping samples from `sampling_dist_q` to `Tensors` with
       shape broadcastable to `q.batch_shape`.
       For example, `log_p` works "just like" `sampling_dist_q.log_prob`.
     sampling_dist_q: The sampling distribution.
       `tf.contrib.distributions.Distribution`.
      `float64` `dtype` recommended.
      `log_p` and `q` should be supported on the same set.
-    z: `Output` of samples from `q`, produced by `q.sample_n`.
-    n: Integer `Output`. Number of samples to generate if `z` is not provided.
+    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     name: A name to give this `Op`.
 
   Returns:
-    The importance sampling estimate. `Output` with `shape` equal
+    The importance sampling estimate. `Tensor` with `shape` equal
     to batch shape of `q`, and `dtype` = `q.dtype`.
   """
   q = sampling_dist_q
@@ -182,26 +182,26 @@ def expectation_importance_sampler_logspace(
   log-space.
 
 
-  User supplies either `Output` of samples `z`, or number of samples to draw `n`
+  User supplies either `Tensor` of samples `z`, or number of samples to draw `n`
 
   Args:
-    log_f: Callable mapping samples from `sampling_dist_q` to `Output`s with
+    log_f: Callable mapping samples from `sampling_dist_q` to `Tensors` with
       shape broadcastable to `q.batch_shape`.
       For example, `log_f` works "just like" `sampling_dist_q.log_prob`.
-    log_p: Callable mapping samples from `sampling_dist_q` to `Output`s with
+    log_p: Callable mapping samples from `sampling_dist_q` to `Tensors` with
       shape broadcastable to `q.batch_shape`.
       For example, `log_p` works "just like" `q.log_prob`.
     sampling_dist_q: The sampling distribution.
       `tf.contrib.distributions.Distribution`.
      `float64` `dtype` recommended.
      `log_p` and `q` should be supported on the same set.
-    z: `Output` of samples from `q`, produced by `q.sample_n`.
-    n: Integer `Output`. Number of samples to generate if `z` is not provided.
+    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     name: A name to give this `Op`.
 
   Returns:
-    Logarithm of the importance sampling estimate. `Output` with `shape` equal
+    Logarithm of the importance sampling estimate. `Tensor` with `shape` equal
     to batch shape of `q`, and `dtype` = `q.dtype`.
   """
   q = sampling_dist_q
@@ -215,10 +215,10 @@ def _logspace_mean(log_values):
   """Evaluate `Log[E[values]]` in a stable manner.
 
   Args:
-    log_values: `Output` holding `Log[values]`.
+    log_values: `Tensor` holding `Log[values]`.
 
   Returns:
-    `Output` of same `dtype` as `log_values`, reduced across dim 0.
+    `Tensor` of same `dtype` as `log_values`, reduced across dim 0.
     `Log[Mean[values]]`.
   """
   # center = Max[Log[values]], with stop-gradient
@@ -249,18 +249,18 @@ def expectation(f, p, z=None, n=None, seed=None, name='expectation'):
       \approx E_p[f(Z)]
   ```
 
-  User supplies either `Output` of samples `z`, or number of samples to draw `n`
+  User supplies either `Tensor` of samples `z`, or number of samples to draw `n`
 
   Args:
-    f: Callable mapping samples from `p` to `Output`s.
+    f: Callable mapping samples from `p` to `Tensors`.
     p: `tf.contrib.distributions.Distribution`.
-    z: `Output` of samples from `p`, produced by `p.sample_n`.
-    n: Integer `Output`. Number of samples to generate if `z` is not provided.
+    z: `Tensor` of samples from `p`, produced by `p.sample_n`.
+    n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     name: A name to give this `Op`.
 
   Returns:
-    An `Output` with the same `dtype` as `p`.
+    A `Tensor` with the same `dtype` as `p`.
 
   Example:
 
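`_logspace_mean` evaluates `Log[Mean[values]]` stably by centering on the maximum before exponentiating, as its `# center = Max[Log[values]]` comment indicates. A NumPy sketch of that trick (an illustration of the identity, not the contrib code, which also handles the stop-gradient on the center):

```python
import numpy as np

def logspace_mean(log_values):
    """Compute log(mean(exp(log_values))) without overflow.

    Shifting by the max makes every exponent <= 0, so exp() cannot
    overflow; the shift is added back outside the log.
    """
    center = np.max(log_values)
    return center + np.log(np.mean(np.exp(log_values - center)))

small = np.array([0.0, 1.0, 2.0])
assert np.isclose(logspace_mean(small), np.log(np.mean(np.exp(small))))

big = np.array([1000.0, 1001.0])         # naive exp() would overflow here
assert np.isfinite(logspace_mean(big))   # centered version stays finite
```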

tensorflow/contrib/bayesflow/python/ops/special_math.py

Lines changed: 4 additions & 4 deletions
@@ -65,11 +65,11 @@ def ndtr(x, name="ndtr"):
   ```
 
   Args:
-    x: `Output` of type `float32`, `float64`.
+    x: `Tensor` of type `float32`, `float64`.
     name: Python string. A name for the operation (default="ndtr").
 
   Returns:
-    ndtr: `Output` with `dtype=x.dtype`.
+    ndtr: `Tensor` with `dtype=x.dtype`.
 
   Raises:
     TypeError: if `x` is not floating-type.
@@ -135,13 +135,13 @@ def log_ndtr(x, series_order=3, name="log_ndtr"):
 
 
   Args:
-    x: `Output` of type `float32`, `float64`.
+    x: `Tensor` of type `float32`, `float64`.
     series_order: Positive Python `integer`. Maximum depth to
       evaluate the asymptotic expansion. This is the `N` above.
     name: Python string. A name for the operation (default="log_ndtr").
 
   Returns:
-    log_ndtr: `Output` with `dtype=x.dtype`.
+    log_ndtr: `Tensor` with `dtype=x.dtype`.
 
   Raises:
     TypeError: if `x.dtype` is not handled.
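`ndtr` is the standard normal CDF. Outside a TensorFlow graph the same quantity can be written with the error function; a small reference sketch (the contrib implementation uses its own graph-friendly evaluation, and `log_ndtr` additionally switches to an asymptotic series in the far tail):

```python
import math

def ndtr(x):
    """Standard normal CDF: P(Z <= x) for Z ~ N(0, 1), via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

assert ndtr(0.0) == 0.5                  # symmetry of the standard normal
assert abs(ndtr(1.96) - 0.975) < 1e-3    # familiar two-sided 95% point
```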

tensorflow/contrib/bayesflow/python/ops/stochastic_gradient_estimators.py

Lines changed: 7 additions & 7 deletions
@@ -75,13 +75,13 @@ def score_function(stochastic_tensor, value, loss, baseline=None,
 
   Args:
     stochastic_tensor: `StochasticTensor` p(x).
-    value: `Output` x. Samples from p(x).
-    loss: `Output`.
-    baseline: `Output` broadcastable to `loss`.
+    value: `Tensor` x. Samples from p(x).
+    loss: `Tensor`.
+    baseline: `Tensor` broadcastable to `loss`.
     name: name to prepend ops with.
 
   Returns:
-    `Output` `p.log_prob(x) * (loss - b)`. Taking the gradient yields the score
+    `Tensor` `p.log_prob(x) * (loss - b)`. Taking the gradient yields the score
     function estimator.
   """
   with ops.name_scope(name, values=[value, loss, baseline]):
@@ -103,7 +103,7 @@ def get_score_function_with_advantage(advantage_fn=None,
 
   Args:
     advantage_fn: callable that takes the `StochasticTensor` and the
-      downstream `loss` and returns an `Output` advantage
+      downstream `loss` and returns a `Tensor` advantage
       (e.g. `loss - baseline`).
     name: name to prepend ops with.
 
@@ -125,7 +125,7 @@ def get_score_function_with_constant_baseline(baseline, name="ScoreFunction"):
   """Score function estimator with constant baseline.
 
   Args:
-    baseline: `Output` to be subtracted from loss.
+    baseline: `Tensor` to be subtracted from loss.
     name: name to prepend ops with.
 
   Returns:
@@ -145,7 +145,7 @@ def get_score_function_with_baseline(baseline_fn=None, name="ScoreFunction"):
 
   Args:
     baseline_fn: callable that takes the `StochasticTensor` and the downstream
-      `loss` and returns an `Output` baseline to be subtracted from the `loss`.
+      `loss` and returns a `Tensor` baseline to be subtracted from the `loss`.
       If None, defaults to `get_mean_baseline`, which is an EMA of the loss.
     name: name to prepend ops with.
 
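The score-function estimator documented above differentiates `p.log_prob(x) * (loss - b)` to get an unbiased estimate of the gradient of `E_p[loss]`. A NumPy sketch of the estimator for a Bernoulli toy case (illustrative only; the parameter value and loss are made up, and the gradient here is computed by hand rather than by autodiff):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                  # Bernoulli parameter of the sampling distribution
x = (rng.random(100_000) < p).astype(float)   # samples from p(x)
loss = x                                 # toy loss f(x) = x, so d E[loss]/dp = 1 exactly

# Score of the Bernoulli log-likelihood: d/dp log p(x) = x/p - (1 - x)/(1 - p)
score = x / p - (1.0 - x) / (1.0 - p)

baseline = 0.0                           # a constant baseline keeps the estimator unbiased
grad_est = np.mean((loss - baseline) * score)
# grad_est approximates 1.0, the true gradient of E[x] with respect to p
```

A good baseline (e.g. an EMA of the loss, as `get_mean_baseline` provides) leaves the mean unchanged but can reduce the variance of `grad_est` substantially.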

tensorflow/contrib/bayesflow/python/ops/stochastic_graph.py

Lines changed: 4 additions & 4 deletions
@@ -54,7 +54,7 @@ def _stochastic_dependencies_map(fixed_losses, stochastic_tensors=None):
   """Map stochastic tensors to the fixed losses that depend on them.
 
   Args:
-    fixed_losses: a list of `Output`s.
+    fixed_losses: a list of `Tensor`s.
     stochastic_tensors: a list of `StochasticTensor`s to map to fixed losses.
       If `None`, all `StochasticTensor`s in the graph will be used.
 
@@ -109,16 +109,16 @@ def surrogate_loss(sample_losses,
       dimensionality of 1 or greater. All losses should have the same shape.
     stochastic_tensors: a list of `StochasticTensor`s to add loss terms for.
       If None, defaults to all `StochasticTensor`s in the graph upstream of
-      the `Output`s in `sample_losses`.
+      the `Tensor`s in `sample_losses`.
     name: the name with which to prepend created ops.
 
   Returns:
-    `Output` loss, which is the sum of `sample_losses` and the
+    `Tensor` loss, which is the sum of `sample_losses` and the
     `loss_fn`s returned by the `StochasticTensor`s.
 
   Raises:
     TypeError: if `sample_losses` is not a list or tuple, or if its elements
-      are not `Output`s.
+      are not `Tensor`s.
     ValueError: if any loss in `sample_losses` does not have dimensionality 1
       or greater.
   """

tensorflow/contrib/bayesflow/python/ops/stochastic_tensor.py

Lines changed: 3 additions & 3 deletions
@@ -91,10 +91,10 @@ def loss(self, sample_loss):
     constant with respect to the input for purposes of the gradient.
 
     Args:
-      sample_loss: `Output`, sample loss downstream of this `StochasticTensor`.
+      sample_loss: `Tensor`, sample loss downstream of this `StochasticTensor`.
 
     Returns:
-      Either `None` or an `Output`.
+      Either `None` or a `Tensor`.
     """
     raise NotImplementedError("surrogate_loss not implemented")
 
@@ -301,7 +301,7 @@ def __init__(self,
         value type set with the `value_type` context manager will be used.
       loss_fn: callable that takes
         `(st, st.value(), influenced_loss)`, where
-        `st` is this `StochasticTensor`, and returns an `Output` loss. By
+        `st` is this `StochasticTensor`, and returns a `Tensor` loss. By
         default, `loss_fn` is the `score_function`, or more precisely, the
         integral of the score function, such that when the gradient is taken,
         the score function results. See the `stochastic_gradient_estimators`
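The `loss_fn` docstring calls the default surrogate the "integral of the score function": differentiating `log_prob(x) * loss` in the distribution parameter, with `loss` held constant (the stop-gradient), recovers `score * loss`. A finite-difference check of that identity for a single Bernoulli draw (toy numbers only, not the contrib API):

```python
import numpy as np

def log_prob_bernoulli(p, x):
    """Log-likelihood of one Bernoulli draw x under parameter p."""
    return x * np.log(p) + (1.0 - x) * np.log(1.0 - p)

p, x, loss = 0.4, 1.0, 3.0   # arbitrary toy values; loss is treated as a constant
eps = 1e-6

# Numerically differentiate the surrogate log_prob(p, x) * loss in p.
num_grad = (log_prob_bernoulli(p + eps, x)
            - log_prob_bernoulli(p - eps, x)) * loss / (2 * eps)

# Analytic score function times loss -- what the gradient should recover.
score = x / p - (1.0 - x) / (1.0 - p)
assert abs(num_grad - score * loss) < 1e-4
```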
