Question
I have started a SageMaker job:
from sagemaker.tensorflow import TensorFlow
mytraining = TensorFlow(entry_point='model.py',
                        role=role,
                        train_instance_count=1,
                        train_instance_type='ml.p2.xlarge',
                        framework_version='2.0.0',
                        py_version='py3',
                        distributions={'parameter_server': {'enabled': False}})
training_data_uri = 's3://path/to/my/data'
mytraining.fit(training_data_uri, run_tensorboard_locally=True)
Using run_tensorboard_locally=True gave me:
Tensorboard is not supported with script mode. You can run the following command: tensorboard --logdir None --host localhost --port 6006 This can be run from anywhere with access to the S3 URI used as the logdir.
It seems like I can't use it in script mode, but I can access the TensorBoard logs in S3? But where are the logs in S3?
import argparse
import json
import os

def _parse_args():
    parser = argparse.ArgumentParser()
    # Data, model, and output directories.
    # model_dir is always passed in by SageMaker. By default this is an S3 path under the default bucket.
    parser.add_argument('--model_dir', type=str)
    parser.add_argument('--sm-model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))
    parser.add_argument('--hosts', type=list, default=json.loads(os.environ.get('SM_HOSTS')))
    parser.add_argument('--current-host', type=str, default=os.environ.get('SM_CURRENT_HOST'))
    return parser.parse_known_args()
if __name__ == "__main__":
    args, unknown = _parse_args()
    train_data, train_labels = load_training_data(args.train)
    eval_data, eval_labels = load_testing_data(args.train)
    mymodel = model(train_data, train_labels, eval_data, eval_labels)
    if args.current_host == args.hosts[0]:
        mymodel.save(os.path.join(args.sm_model_dir, '000000002/model.h5'))
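One way to see which S3 paths a finished training job is actually configured with is to describe it (a minimal sketch; the job name is a placeholder, and the TensorBoardOutputConfig key only appears when that config was set on the estimator):

import boto3

# Describe the finished training job to see its S3 output locations.
sm = boto3.client('sagemaker')
desc = sm.describe_training_job(TrainingJobName='my-training-job-name')  # placeholder name

print(desc['OutputDataConfig']['S3OutputPath'])    # base S3 output path of the job
print(desc['ModelArtifacts']['S3ModelArtifacts'])  # location of model.tar.gz
# Only present when the estimator was created with a TensorBoardOutputConfig:
print(desc.get('TensorBoardOutputConfig', {}).get('S3OutputPath'))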
A similar question is here: stack
EDIT: I tried this new config, but it doesn't work.
from sagemaker.debugger import TensorBoardOutputConfig

tensorboard_output_config = TensorBoardOutputConfig(s3_output_path='s3://PATH/to/my/bucket')

mytraining = TensorFlow(entry_point='model.py',
                        role=role,
                        train_instance_count=1,
                        train_instance_type='ml.p2.xlarge',
                        framework_version='2.0.0',
                        py_version='py3',
                        distributions={'parameter_server': {'enabled': False}},
                        tensorboard_output_config=tensorboard_output_config)
I added the callback in my model.py script; it is the same one I use without SageMaker. As log_dir I set the default local directory that TensorBoardOutputConfig syncs to S3, but it doesn't work. docs I also tried it without the callback.
tensorboardCallback = tf.keras.callbacks.TensorBoard(
    log_dir='/opt/ml/output/tensorboard',
    histogram_freq=0,
    # batch_size=32,  # ignored in TF 2.0
    write_graph=True,
    write_grads=False,
    write_images=False,
    embeddings_freq=0,
    embeddings_layer_names=None,
    embeddings_metadata=None,
    embeddings_data=None,
    update_freq='batch')
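The callback only takes effect if it is actually passed to the fit call inside model.py. A sketch of what that could look like inside the model() function (the variable names mirror the script above; the epoch count is a placeholder):

# Write TensorBoard events to /opt/ml/output/tensorboard,
# the local path that TensorBoardOutputConfig syncs to S3.
mymodel.fit(train_data, train_labels,
            validation_data=(eval_data, eval_labels),
            epochs=10,
            callbacks=[tensorboardCallback])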
Answer 1:
It is difficult to tell what the exact root cause is in your case, but the following steps worked for me. I started TensorBoard inside the notebook instance manually.
1. Follow the guide on SageMaker debugging to configure the S3 output path for the TensorBoard logs:

from sagemaker.debugger import TensorBoardOutputConfig

tensorboard_output_config = TensorBoardOutputConfig(
    s3_output_path='s3://bucket-name/tensorboard_log_folder/'
)

estimator = TensorFlow(entry_point='train.py',
                       source_dir='./',
                       model_dir=model_dir,
                       output_path=output_dir,
                       train_instance_type=train_instance_type,
                       train_instance_count=1,
                       hyperparameters=hyperparameters,
                       role=sagemaker.get_execution_role(),
                       base_job_name='Testing-TrainingJob',
                       framework_version='2.2',
                       py_version='py37',
                       script_mode=True,
                       tensorboard_output_config=tensorboard_output_config)

estimator.fit(inputs)
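Before starting TensorBoard it can help to confirm that event files are actually arriving under that prefix (a quick sketch; the bucket and prefix are the placeholders from above):

import boto3

# List whatever the training job has synced to the TensorBoard output path so far.
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket='bucket-name', Prefix='tensorboard_log_folder/')
for obj in response.get('Contents', []):
    print(obj['Key'])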
2. Start TensorBoard with the S3 location provided above, from a terminal on the notebook instance:

$ tensorboard --logdir 's3://bucket-name/tensorboard_log_folder/'
3. Access the board via a URL ending in /proxy/6006/. You need to update the notebook instance details in the following URL:

https://myinstance.notebook.us-east-1.sagemaker.aws/proxy/6006/
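If you prefer not to build that URL by hand, one option is to look up the notebook instance's hostname and append the proxy path (a sketch; the instance name is a placeholder, and it assumes the 'Url' field returned by describe_notebook_instance matches the hostname used in the browser):

import boto3

# Look up the notebook instance's hostname and append the TensorBoard proxy path.
sm = boto3.client('sagemaker')
url = sm.describe_notebook_instance(NotebookInstanceName='myinstance')['Url']
print(f'https://{url}/proxy/6006/')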
Source: https://stackoverflow.com/questions/60839279/how-can-i-use-tensorboard-with-aws-sagemaker-tensorflow