AssertionError: Could not compute output Tensor("ctc/Identity:0", shape=(None, 1), dtype=float32) #43966
@AsterLiu I am noticing a different error, and it is not a bug in TF. I think the code expects 4 inputs, whereas model.fit receives only one. Please check the gist here.
GitHub is mainly for bug and performance issues. Please post support questions like this on Stack Overflow, where there is a larger community to help. Thanks!
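As a sketch of what the comment describes (the shapes and label length here are assumptions, not taken from the reporter's code): the model has four named input layers, so model.fit needs a dict or list carrying all four arrays, plus a dummy target because the `ctc` Lambda layer already outputs the loss:

```python
import numpy as np

# Hypothetical shapes: 9 samples as in the issue; the CNN output has 10 time
# steps (see the model summary), and we assume at most 8 characters per label.
n_samples, time_steps, max_label_len, vocab_size = 9, 10, 8, 21713

images = np.random.rand(n_samples, 32, 320, 1).astype("float32")
labels = np.random.randint(0, vocab_size, size=(n_samples, max_label_len)).astype("float32")

# One value per sample: CTC needs the prediction length and the true label length.
input_length = np.full((n_samples, 1), time_steps, dtype="int64")
label_length = np.full((n_samples, 1), max_label_len, dtype="int64")

# The 'ctc' Lambda layer outputs the loss itself, so the fit target is a dummy.
dummy_targets = np.zeros((n_samples, 1), dtype="float32")

# Keys match the InputLayer names in the model summary.
inputs = {
    "image_input": images,
    "the_labels": labels,
    "input_length": input_length,
    "label_length": label_length,
}

# model.fit(inputs, dummy_targets, epochs=50, ...)
print({k: v.shape for k, v in inputs.items()})
```

Passing only `train_images` as in the traceback leaves `the_labels`, `input_length`, and `label_length` without data, which is what the "Could not compute output" assertion is complaining about.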
I'm trying out an OCR project using a CRNN model. This error occurs when execution reaches model.fit. I tried several methods to solve the problem, but nothing worked.
I am using:
Win10 OS,
PyCharm IDE 2020.2,
TF version 2.3.0,
Keras version 2.4.3,
Python version 3.7.7,
executed on my GPU (2080 Ti).
How can I fix it? Many thanks.
Due to the large amount of data, I only selected a small amount of data for the test. The details of the data are as follows.
image type is: <class 'numpy.ndarray'>
label type is: <class 'numpy.ndarray'>
image shape is: (9, 32, 320)
label shape is: (9, 21713)
image dtype is: float64
label dtype is: float64
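Purely a guess about the encoding: if the (9, 21713) float64 label array is one-hot (one class per image out of a 21713-entry vocabulary), the integer indices that a CTC labels input expects could be recovered with argmax:

```python
import numpy as np

# Hypothetical stand-in for the (9, 21713) float64 label array from the issue.
one_hot_labels = np.zeros((9, 21713))
one_hot_labels[np.arange(9), [3, 7, 1, 0, 42, 5, 9, 2, 8]] = 1.0

# If the array is one-hot, argmax along axis 1 recovers integer class indices.
int_labels = one_hot_labels.argmax(axis=1)
print(int_labels.shape)  # (9,)
```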
The error details are below.
Model: "model"
Layer (type) Output Shape Param # Connected to
image_input (InputLayer) [(None, 32, 320, 1)] 0
block1_conv1 (Conv2D) (None, 32, 320, 64) 640 image_input[0][0]
block1_conv2 (Conv2D) (None, 32, 320, 64) 36928 block1_conv1[0][0]
block1_pool (MaxPooling2D) (None, 16, 160, 64) 0 block1_conv2[0][0]
block2_conv1 (Conv2D) (None, 16, 160, 128) 73856 block1_pool[0][0]
block2_conv2 (Conv2D) (None, 16, 160, 128) 147584 block2_conv1[0][0]
block2_pool (MaxPooling2D) (None, 8, 80, 128) 0 block2_conv2[0][0]
block3_conv1 (Conv2D) (None, 8, 80, 256) 295168 block2_pool[0][0]
block3_conv2 (Conv2D) (None, 8, 80, 256) 590080 block3_conv1[0][0]
block3_conv3 (Conv2D) (None, 8, 80, 256) 590080 block3_conv2[0][0]
block3_conv4 (Conv2D) (None, 8, 80, 256) 590080 block3_conv3[0][0]
block3_pool (MaxPooling2D) (None, 4, 40, 256) 0 block3_conv4[0][0]
block4_conv1 (Conv2D) (None, 4, 40, 512) 1180160 block3_pool[0][0]
block4_conv2 (Conv2D) (None, 4, 40, 512) 2359808 block4_conv1[0][0]
block4_conv3 (Conv2D) (None, 4, 40, 512) 2359808 block4_conv2[0][0]
block4_conv4 (Conv2D) (None, 4, 40, 512) 2359808 block4_conv3[0][0]
block4_pool (MaxPooling2D) (None, 2, 20, 512) 0 block4_conv4[0][0]
block5_conv1 (Conv2D) (None, 2, 20, 512) 2359808 block4_pool[0][0]
block5_conv2 (Conv2D) (None, 2, 20, 512) 2359808 block5_conv1[0][0]
block5_conv3 (Conv2D) (None, 2, 20, 512) 2359808 block5_conv2[0][0]
block5_conv4 (Conv2D) (None, 2, 20, 512) 2359808 block5_conv3[0][0]
block5_pool (MaxPooling2D) (None, 1, 10, 512) 0 block5_conv4[0][0]
permute (Permute) (None, 10, 1, 512) 0 block5_pool[0][0]
timedistrib (TimeDistributed) (None, 10, 512) 0 permute[0][0]
bidirectional (Bidirectional) (None, 10, 1024) 4198400 timedistrib[0][0]
dense (Dense) (None, 10, 512) 524800 bidirectional[0][0]
bidirectional_1 (Bidirectional) (None, 10, 1024) 4198400 dense[0][0]
orc_out (Dense) (None, 10, 21713) 22255825 bidirectional_1[0][0]
the_labels (InputLayer) [(None, None)] 0
input_length (InputLayer) [(None, 1)] 0
label_length (InputLayer) [(None, 1)] 0
ctc (Lambda) (None, 1) 0 orc_out[0][0]
the_labels[0][0]
input_length[0][0]
label_length[0][0]
Total params: 51,200,657
Trainable params: 51,200,657
Non-trainable params: 0
Epoch 1/50
Traceback (most recent call last):
File "C:/Users/Administrator/Desktop/Python代码/test4.py", line 89, in
model_result = model.fit(train_images, train_labels, steps_per_epoch=10000, epochs=50, callbacks=callback_list, validation_data=(test_images, test_labels), validation_steps=50)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 848, in fit
tmp_logs = train_function(iterator)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\eager\def_function.py", line 580, in call
result = self._call(*args, **kwds)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\eager\def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\eager\def_function.py", line 506, in _initialize
*args, **kwds))
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\eager\function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\eager\function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\eager\function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\framework\func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\eager\def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().wrapped(*args, **kwds)
File "C:\Users\Administrator\Pythonproject\venv\lib\site-packages\tensorflow\python\framework\func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
AssertionError: in user code: