Working with Pipelines
Note You can find the Default object on the Architect page in the Other section.
Note The minimum scheduling interval is 15 minutes.
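As a sketch of that constraint, a Schedule object honoring the 15-minute floor might look like the following (the object id is illustrative):

```json
{
  "id": "Every15Minutes",
  "type": "Schedule",
  "period": "15 minutes",
  "startAt": "FIRST_ACTIVATION_DATE_TIME"
}
```

A `period` shorter than "15 minutes" is rejected at pipeline validation.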
Note You can't clone a pipeline using the command line interface (CLI).
Important You can't restore a pipeline after you delete it, so be sure that you won't need the pipeline in the future before you delete it.
Note Staging functions only when the stage field is set to true on an activity, such as ShellCommandActivity. For more information, see .
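As a minimal sketch (all object ids and the bucket path are illustrative), an activity that opts into staging sets stage to true; AWS Data Pipeline then exposes the staged input and output locations to the command through the `${INPUT1_STAGING_DIR}` and `${OUTPUT1_STAGING_DIR}` variables:

```json
{
  "id": "MyShellCommand",
  "type": "ShellCommandActivity",
  "stage": "true",
  "input": { "ref": "MyInputS3Data" },
  "output": { "ref": "MyOutputS3Data" },
  "command": "grep error ${INPUT1_STAGING_DIR}/* > ${OUTPUT1_STAGING_DIR}/errors.txt"
}
```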
Note This scenario works as described only if your data inputs and outputs are S3DataNode objects. Additionally, output data staging is allowed only when directoryPath is set on the output S3DataNode object.
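To illustrate the directoryPath requirement, an output S3DataNode eligible for staging could be defined as follows (the id and bucket path are placeholders):

```json
{
  "id": "MyOutputS3Data",
  "type": "S3DataNode",
  "directoryPath": "s3://my-example-bucket/output/"
}
```

An S3DataNode that instead specifies a single filePath cannot be used for output data staging.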
Note This scenario works as described only if your data inputs and outputs are S3DataNode or MySqlDataNode objects. Table staging is not supported for DynamoDBDataNode.
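A sketch of table staging under those constraints (all ids and the query are illustrative): with stage set to true on a HiveActivity, the staged input and output tables become addressable through the #{input.tableName} and #{output.tableName} expressions:

```json
{
  "id": "MyHiveActivity",
  "type": "HiveActivity",
  "stage": "true",
  "input": { "ref": "MyMySqlData" },
  "output": { "ref": "MyOutputS3Data" },
  "hiveScript": "INSERT OVERWRITE TABLE #{output.tableName} SELECT * FROM #{input.tableName};"
}
```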
Note In this example, the table name variable has the # (hash) character prefix because AWS Data Pipeline uses expressions to access the tableName or directoryPath. For more information about how expression evaluation works in AWS Data Pipeline, see .
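To make the expression syntax concrete, the #{...} form marks a value that AWS Data Pipeline evaluates at runtime rather than a literal string. For instance, an activity can read the directoryPath off its own input node (the ids here are illustrative):

```json
{
  "id": "ListInputActivity",
  "type": "ShellCommandActivity",
  "input": { "ref": "MyS3Input" },
  "command": "aws s3 ls #{input.directoryPath}"
}
```

At runtime, #{input.directoryPath} resolves to the directoryPath field of the referenced S3DataNode.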
Note On user-defined fields, AWS Data Pipeline only checks for valid references to other pipeline components, not any custom field string values that you add.
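As a sketch of that distinction (field values and ids are illustrative), user-defined fields carry the "my" prefix; AWS Data Pipeline validates the reference in myRelatedObject but performs no checks on the free-form string in myOwner:

```json
{
  "id": "MyActivity",
  "type": "ShellCommandActivity",
  "command": "echo hello",
  "myOwner": "data-team",
  "myRelatedObject": { "ref": "SomeOtherComponent" }
}
```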
Note The following list includes regions in which AWS Data Pipeline can orchestrate workflows and launch Amazon EMR or Amazon EC2 resources, even though the AWS Data Pipeline service itself may not be supported in those regions. For information about regions in which AWS Data Pipeline is supported, see .
Note If you are not writing programs that interact with AWS Data Pipeline, you do not need to install any of the AWS SDKs. You can create and run pipelines using the console or the command line interface. For more information, see .