Conversation
I didn't build the docs locally to check the Korean files. I only read the diff on GitHub, and the expressions looked fine to me. I will also check how the original content has changed and review it again later. If I find anything that needs to be fixed, I will open a new PR. Thank you for updating the Korean version together! 😄
Implement a kernel that broadcast-adds vector `a` and vector `b` and stores the result in the 2D matrix `output`.
**Broadcasting** in parallel programming refers to the operation where lower-dimensional arrays are automatically expanded to match the shape of higher-dimensional arrays during element-wise operations. Instead of physically replicating data in memory, values are logically repeated across the additional dimensions. For example, adding a 1D vector to each row (or column) of a 2D matrix applies the same vector elements repeatedly without creating multiple copies.
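The semantics described above can be sketched in Python with NumPy (used here only to illustrate the expected result, not the kernel's actual implementation): reshaping `a` to a column and `b` to a row lets the library repeat each logically, producing `output[i, j] = a[i] + b[j]` without copying either vector.

```python
import numpy as np

# Broadcast-add sketch: output[i, j] = a[i] + b[j]
a = np.array([0.0, 1.0, 2.0])   # shape (3,)
b = np.array([10.0, 20.0])      # shape (2,)

# a[:, None] has shape (3, 1); b[None, :] has shape (1, 2).
# Broadcasting logically expands both to (3, 2) during the add.
output = a[:, None] + b[None, :]
```

A GPU kernel expresses the same thing per thread: each thread at grid position `(i, j)` reads `a[i]` and `b[j]` once and writes a single element of `output`.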
In switching to the TileTensor-primary file here, this drops the intro and broadcasting definition that we had in the previous non-TileTensor document. Should those be re-added to the TileTensor one so the intro content is preserved here?
Good catch! Pushed a few commits to resolve this and your other comments.
tests/test_origin_fix.mojo
Sorry, this might be another file left over from debugging during the migration.
Migrate from LayoutTensor to TileTensor
This PR replaces all uses of the deprecated LayoutTensor API with TileTensor across problems, solutions, and documentation.
- Code changes
- Tests
- Documentation (English + Korean)
- pixi.toml