
Conversation

@ShangkunLi
Collaborator

In this PR:

  1. Introduce two kinds of return ops in the Neura dataflow IR:

    • neura.return_void: its operand is the trigger value
    • neura.return_value: its operands are the returned values
      These two return ops are only enabled in dataflow IR. In the conventional Neura CDFG IR, we still use neura.return.
  2. We create a canonicalize-return pass to mark the type (return_type = void/value) of neura.return.

  3. We introduce logic in the transform-ctrl-to-data-flow pass to transform neura.return into neura.return_value/neura.return_void.

  4. A neura.yield is added at the end of each flattened block to act as the block's terminator. This is just to obey the MLIR block rules: https://mlir.llvm.org/docs/LangRef/#blocks.

TODO: Enable the interpreter for these two newly introduced ops. Since this PR is already large, I will make a separate PR for that.

Example:

```mlir
neura.return_void %cond : !neura.data<i1, i1> // in dataflow mode
```

This is NOT a terminator; the block ends with neura.yield instead.
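To make the terminator placement concrete, here is a hedged sketch of a full flattened block. The block structure and the neura.yield spelling are assumptions for illustration; only the return_void line is taken from the example above.

```mlir
^flattened_block:
  // ... flattened dataflow ops ...
  neura.return_void %cond : !neura.data<i1, i1>  // not a terminator
  neura.yield                                    // block terminator, per MLIR's block rules
```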
Contributor


Then where is the yield? Can the neura.yield also be shown in the example above?


```cpp
// Looks for any suitable value in the predecessor block.
if (op.getNumResults() > 0) {
  trigger_value = op.getResult(0);
```
Contributor


Can you elaborate on "suitable value"? This line suggests that any op's first result can serve as trigger_value — is that sufficient?

```cpp
builder.create<neura::CondBr>(
    cond_br.getLoc(), cond_br.getCondition(), true_args, false_args,
    cond_br.getTrueDest(), cond_br.getFalseDest());
cond_br.erase();
```
Contributor


I thought we need to OR all of the cond_br conditions together?

```cpp
// Handles cond_preds: adds a block argument for the trigger value, and
// updates each predecessor's terminator to pass the trigger value.
BlockArgument trigger_arg =
    ret_block->addArgument(builder.getI1Type(), void_ret_op.getLoc());
```
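For context, a hedged sketch of what the return block might look like after this transformation. The block name, operand spelling, and use of a plain i1 type are assumptions inferred from the quoted code, not verbatim output of the pass:

```mlir
// Hypothetical return block after the trigger block argument is added:
^ret_block(%trigger: i1):          // new block argument; each cond_pred passes its trigger value
  neura.return_void %trigger : i1
  neura.yield                      // block terminator
```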
Contributor


The triggering value is getI1Type()? I thought it is the OR of all conditions and needs to be a regular type, i.e., <data, predicate>?

```diff
 }

-Value getPrecessedCondition(Value condition, bool is_not_condition,
+Value getProcessedCondition(Value condition, bool is_not_condition,
```
Contributor


Can you remind me why we need this getProcessedCondition()?

```cpp
llvm::errs() << "[ctrl2data] Converting neura.return operations to "
                "return_void/value...\n";

for (neura::ReturnOp return_op : return_ops) {
```
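Illustratively, the conversion this loop performs might look like the before/after below. The attribute spelling and operand types are assumptions for illustration; the PR only states that canonicalize-return tags each neura.return and that this pass rewrites it into return_value/return_void:

```mlir
// Before (CDFG IR, hypothetically tagged by the canonicalize-return pass):
neura.return %v {return_type = "value"} : f32

// After (dataflow IR):
neura.return_value %v : f32
```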
Contributor


What is the reason we can't have one pass (e.g., canonicalize_return_pass) handle the whole transformation? Is it because canonicalize_return_pass touches the condition, and the condition would then be transformed again in ctrl2data_pass?

Contributor


Why remove this? Can't this test be updated/fixed?

```diff
 //MAPPING: func.func @kernel
 //MAPPING-SAME: accelerator = "neura", dataflow_mode = "predicate"
-//MAPPING-SAME: mapping_info = {compiled_ii = 11 : i32, mapping_mode = "spatial-temporal", mapping_strategy = "heuristic", rec_mii = 9 : i32, res_mii = 5 : i32, x_tiles = 4 : i32, y_tiles = 4 : i32}
+//MAPPING-SAME: mapping_info = {compiled_ii = 13 : i32, mapping_mode = "spatial-temporal", mapping_strategy = "heuristic", rec_mii = 9 : i32, res_mii = 6 : i32, x_tiles = 4 : i32, y_tiles = 4 : i32}
```
Contributor


Did the II increase because more cond ops are added?
