io.grpc.StatusRuntimeException: UNIMPLEMENTED error #3
Honestly, I don't remember which version of TensorFlow Serving was used (and sorry for not documenting it). Since different TensorFlow versions sometimes cause issues (as with other Python libraries), this might be the reason. You should be able to find out the version by getting into the Docker container and running a command there or checking the config files. I recommend double-checking with the TensorFlow community on how to determine which version is used. Having said this, running on hardware such as IBM PPC64 could of course also be the issue, especially if it works successfully on a Mac.
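A minimal sketch of that version check, assuming the serving container is still running and that the tensorflow_model_server binary supports the --version flag (newer builds print their version with it; the container name tf-serving below is only a placeholder):
# find the running serving container; "tf-serving" is a placeholder name
docker ps
# print the model server version; if --version is not supported by this build,
# the image tag shown by "docker images" is another hint
docker exec -it tf-serving tensorflow_model_server --version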
@kaiwaehner - Thanks for your suggestions. I believe the tensorflow-serving server is started successfully on port 9000 on my ppc64 system. However, I think the problem is in the client code I got from this repo. The Docker image contains only the serving part, and the client code has to be compiled once the server is up on port 9000. So when I run the client code, I get the error.
For the client side, I would first try out the client I built with Maven: the pom.xml shows all dependencies and versions.
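A sketch of that workflow, assuming the client repository is already cloned and Maven is installed (the jar and main class names are the ones used elsewhere in this thread):
# build the jar-with-dependencies defined in pom.xml
mvn clean package
# run the Kafka Streams / gRPC client against the local setup
java -cp target/tensorflow-serving-java-grpc-kafka-streams-1.0-jar-with-dependencies.jar com.github.megachucky.kafka.streams.machinelearning.Kafka_Streams_TensorFlow_Serving_gRPC_Example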
Thanks @kaiwaehner. I see the grpc version used is 1.13.1. I will check if I can use the same version.
Hi @kaiwaehner - I tried grpc version 1.13.1 and I am still seeing the same issue.
(srikanth) [root@brazossrik01 srikanth]# git clone https://github.com/grpc/grpc-java.git
You are in 'detached HEAD' state. You can look around, make experimental
If you want to create a new branch to retain commits you create, you may git checkout -b
HEAD is now at 9a3e0705b Bump version to 1.13.1
Welcome to Gradle 4.7! Here are the highlights of this release:
For more details see https://docs.gradle.org/4.7/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
BUILD SUCCESSFUL in 1m 17s
(srikanth) [root@brazossrik01 bin]# echo -e "src/main/resources/example.jpg" | /root/srikanth/kafkacat-master/kafkacat -b localhost:9092 -P -t ImageInputTopic
(srikanth) [root@brazossrik01 bin]# java -cp target/tensorflow-serving-java-grpc-kafka-streams-1.0-jar-with-dependencies.jar com.github.megachucky.kafka.streams.machinelearning.Kafka_Streams_TensorFlow_Serving_gRPC_Example
Please let me know if you have any insights that could help me.
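As a side note, building grpc-java itself with Gradle should not be necessary just to pin the client's gRPC version; that comes from the dependency declared in the client's pom.xml. One way to confirm which gRPC version actually lands on the client's classpath (run from the client project directory) is Maven's dependency tree:
# list the resolved io.grpc artifacts and their versions
mvn dependency:tree -Dincludes=io.grpc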
Sorry to hear this. Maybe it really is an OS issue then. I am sorry, but I don't know how to fix this.
Hi,
I am planning to implement this on IBM PPC64 based systems. I had trouble starting the tensorflow-serving server, but with help from https://github.com/thammegowda/tensorflow-grpc-java I now have tensorflow-serving running with the inception inference model in place:
(srikanth) [root@brazossrik01 bin]# nohup tensorflow_model_server --model_name=inception --model_base_path=/root/srikanth/SERVING_INCEPTION/SERVING_INCEPTION --port=9000 2>&1 &
[1] 357
(srikanth) [root@brazossrik01 bin]# nohup: ignoring input and appending output to 'nohup.out'
(srikanth) [root@brazossrik01 bin]# ps -eaf |grep 9000
root 357 32464 27 23:03 pts/1 00:00:01 tensorflow_model_server --model_name=inception --model_base_path=/root/srikanth/SERVING_INCEPTION/SERVING_INCEPTION --port=9000
root 574 32464 0 23:03 pts/1 00:00:00 grep --color=auto 9000
(srikanth) [root@brazossrik01 bin]#
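In addition to the ps check above, a quick way to confirm that the server is actually listening on port 9000 would be something like this (ss is from iproute2; netstat -tlnp works similarly):
# list listening TCP sockets and filter for the gRPC port
ss -ltnp | grep 9000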
However, when I try to use the client code, I get the UNIMPLEMENTED error after producing a message with kafkacat:
(srikanth) [root@brazossrik01 bin]# echo -e "src/main/resources/example.jpg" | /root/srikanth/kafkacat-master/kafkacat -b localhost:9092 -P -t ImageInputTopic
(srikanth) [root@brazossrik01 bin]#
(srikanth) [root@brazossrik01 bin]# java -cp target/tensorflow-serving-java-grpc-kafka-streams-1.0-jar-with-dependencies.jar com.github.megachucky.kafka.streams.machinelearning.Kafka_Streams_TensorFlow_Serving_gRPC_Example
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Image Recognition Microservice is running...
Input images arrive at Kafka topic ImageInputTopic; Output predictions going to Kafka topic ImageOutputTopic
Image path: src/main/resources/example.jpg
Image = src/main/resources/example.jpg
io.grpc.StatusRuntimeException: UNIMPLEMENTED
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:222)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:203)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:132)
at com.github.megachucky.kafka.streams.machinelearning.InceptionBlockingStub.classify(InceptionBlockingStub.java:63)
at com.github.megachucky.kafka.streams.machinelearning.TensorflowObjectRecogniser.recognise(TensorflowObjectRecogniser.java:66)
at com.github.megachucky.kafka.streams.machinelearning.Kafka_Streams_TensorFlow_Serving_gRPC_Example.lambda$main$0(Kafka_Streams_TensorFlow_Serving_gRPC_Example.java:91)
at org.apache.kafka.streams.kstream.internals.AbstractStream$2.apply(AbstractStream.java:111)
at org.apache.kafka.streams.kstream.internals.KStreamMapValues$KStreamMapProcessor.process(KStreamMapValues.java:40)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:115)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:146)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:129)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:93)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:84)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:351)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:104)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:413)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:862)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)
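For context, gRPC's UNIMPLEMENTED status means the connection succeeded but the server does not recognize the service or method the client called, which fits a client/server API or version mismatch. Since the server above was started with nohup, its log went to nohup.out; checking whether the inception model actually loaded is a cheap first step (the path assumes the directory the server was started from):
# inspect the model server log written by nohup
tail -n 50 nohup.out
grep -iE "inception|error|warn" nohup.out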
I got a suggestion from Thammegowda that it could be a version mismatch, so I wanted to check which version of tensorflow-serving you used so that I can try the same version.
If you feel it isn't a version problem, please let me know if you have any other insights that could help me solve this issue.
Thanks in advance for your kind support.
BTW - I am able to run this on my Mac OS by creating the Docker image using Thammegowda's GitHub repo. That isn't supported on the PPC64 system, as the libraries are compiled for x86.