
Commit dc78b9e

style: Run pre-commit hooks
1 parent 9fed0b7 commit dc78b9e

37 files changed

Lines changed: 16145 additions & 15406 deletions

.gitignore

Lines changed: 2 additions & 2 deletions
@@ -198,9 +198,9 @@ cython_debug/
 .abstra/

 # Visual Studio Code
-# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
+# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
 # that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
-# and can be added to the global gitignore or merged into this file. However, if you prefer,
+# and can be added to the global gitignore or merged into this file. However, if you prefer,
 # you could uncomment the following to ignore the entire vscode folder
 # .vscode/
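The removed and added lines above read identically because this commit only runs pre-commit hooks, which strip trailing whitespace that is invisible in the rendered diff. A minimal sketch of that normalization with `sed` (the sample input is illustrative, not taken from the repo):

```shell
# Trailing-whitespace trimming of the kind a pre-commit hook performs:
# the visible text is unchanged, so the diff's -/+ pairs look identical.
printf '# Visual Studio Code   \n' | sed 's/[[:space:]]*$//'
```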

.pre-commit-config.yaml

Lines changed: 0 additions & 1 deletion
@@ -36,4 +36,3 @@ repos:
   hooks:
   - id: black # Format code.
     args: [--line-length=100]
-
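For context, the black hook shown in this diff sits inside a standard pre-commit configuration; a minimal `.pre-commit-config.yaml` consistent with the fragment might look like the following sketch (the `rev` value is a placeholder, not taken from this commit):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0          # placeholder revision, not from this commit
    hooks:
      - id: black        # Format code.
        args: [--line-length=100]
```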

LICENSE

Lines changed: 1 addition & 1 deletion
@@ -19,4 +19,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+SOFTWARE.

config/arch/hrm.yaml

Lines changed: 1 addition & 1 deletion
@@ -21,4 +21,4 @@ puzzle_emb_ndim: ${.hidden_size}
 pos_encodings: rope
 forward_dtype: bfloat16

-mlp_t: False # use mlp on L instead of transformer
+mlp_t: False # use mlp on L instead of transformer

config/arch/transformers_baseline.yaml

Lines changed: 3 additions & 3 deletions
@@ -7,12 +7,12 @@ halt_exploration_prob: 0.1
 halt_max_steps: 16

 H_cycles: 1 # kept for compatibility
-H_layers: 8
+H_layers: 8

-hidden_size: 512
+hidden_size: 512
 num_heads: 12
 expansion: 4

 puzzle_emb_ndim: ${.hidden_size}

-pos_encodings: rope
+pos_encodings: rope

config/arch/trm.yaml

Lines changed: 1 addition & 1 deletion
@@ -23,4 +23,4 @@ forward_dtype: bfloat16

 mlp_t: False # use mlp on L instead of transformer
 puzzle_emb_len: 16 # if non-zero, its specified to this value
-no_ACT_continue: True # No continue ACT loss, only use the sigmoid of the halt which makes much more sense
+no_ACT_continue: True # No continue ACT loss, only use the sigmoid of the halt which makes much more sense

config/arch/trm_hier6.yaml

Lines changed: 1 addition & 1 deletion
@@ -23,4 +23,4 @@ forward_dtype: bfloat16

 mlp_t: False # use mlp on L instead of transformer
 puzzle_emb_len: 16 # if non-zero, its specified to this value
-no_ACT_continue: True # No continue ACT loss, only use the sigmoid of the halt which makes much more sense
+no_ACT_continue: True # No continue ACT loss, only use the sigmoid of the halt which makes much more sense

config/arch/trm_singlez.yaml

Lines changed: 1 addition & 1 deletion
@@ -23,4 +23,4 @@ forward_dtype: bfloat16

 mlp_t: False # use mlp on L instead of transformer
 puzzle_emb_len: 16 # if non-zero, its specified to this value
-no_ACT_continue: True # No continue ACT loss, only use the sigmoid of the halt which makes much more sense
+no_ACT_continue: True # No continue ACT loss, only use the sigmoid of the halt which makes much more sense

config/cfg_pretrain.yaml

Lines changed: 1 addition & 1 deletion
@@ -39,4 +39,4 @@ min_eval_interval: 0 # when to start the eval

 ema: False # use Exponential-Moving-Average
 ema_rate: 0.999 # EMA-rate
-freeze_weights: False # If True, freeze weights and only learn the embeddings
+freeze_weights: False # If True, freeze weights and only learn the embeddings
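The `ema` / `ema_rate: 0.999` options touched by this diff refer, per their own comments, to an Exponential-Moving-Average of model weights. A minimal sketch of such an update (plain floats stand in for parameters; the function name and shapes are hypothetical, not from this repo):

```python
# Exponential-Moving-Average weight tracking, as the
# `ema: False` / `ema_rate: 0.999` options in cfg_pretrain.yaml suggest.
def ema_update(shadow, weights, rate=0.999):
    """Blend current weights into the shadow (EMA) copy."""
    return [rate * s + (1.0 - rate) * w for s, w in zip(shadow, weights)]

shadow = [1.0, 2.0]
weights = [2.0, 0.0]
shadow = ema_update(shadow, weights, rate=0.9)
# shadow is now [1.1, 1.8]: each entry moved 10% toward the new weight.
```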
