
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (48)
-
Authorizations overridden by plugins
27 April 2010, by
Mediaspip core autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (9672)
-
How to use the -stats_period parameter together with FFmpeg -progress?
30 January 2021, by Thiago Franklin
I am using -progress to get the current frame that is being processed, writing it to a .txt file, but I would like to use -stats_period to control how often the file is updated. However, when I add -stats_period to the script, it shows this error message:




Unrecognized option 'stats_period'.
Error splitting the argument list: Option not found




And I couldn't find any usage examples, either in the forums or in the FFmpeg documentation.
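
For reference, the intended usage would look something like the sketch below (file names and the encoder are placeholders); -stats_period only exists in sufficiently recent FFmpeg builds (it was added around FFmpeg 4.4), so older binaries reject it with exactly the error shown above.


ffmpeg -i input.mp4 -c:v libx264 -progress progress.txt -stats_period 5 output.mp4


Here -stats_period 5 asks FFmpeg to refresh the -progress output roughly every 5 seconds instead of the default half-second interval.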


-
Problem writing video using FFmpeg from a vector of cv::Mat
30 December 2022, by Alex
I'm trying to create two functions: one to read a video and store the frames in a vector of cv::Mat, and another to take a vector of cv::Mat and write it out as a video. The code compiles and runs without exceptions, but the resulting video doesn't play; there is data inside the file, yet VLC cannot play it. What am I doing wrong in the function that writes the video?


#include <iostream>
#include <string>
#include <vector>

#include <opencv2/core/mat.hpp>
#include <opencv2/imgcodecs.hpp>


extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
#include <libavcodec/avcodec.h>
#include <libavutil/pixdesc.h>
#include <libavutil/opt.h>
}

// helper function to check for FFmpeg errors
inline void checkError(int error, const std::string &message) {
 if (error < 0) {
 std::cerr << message << ": " << av_err2str(error) << std::endl;
 exit(EXIT_FAILURE);
 }
}
 

int writeVideo(const std::string& video_path, std::vector<cv::Mat>& frames, int width, int height, int fps) {
 // initialize FFmpeg
 av_log_set_level(AV_LOG_ERROR);
 avformat_network_init();

 // create the output video context
 AVFormatContext *formatContext = nullptr;
 int error = avformat_alloc_output_context2(&formatContext, nullptr, nullptr, video_path.c_str());
 checkError(error, "Error creating output context");

 // create the video stream
 AVStream *videoStream = avformat_new_stream(formatContext, nullptr);
 if (!videoStream) {
 std::cerr << "Error creating video stream" << std::endl;
 exit(EXIT_FAILURE);
 }

 // create the video codec context
 const AVCodec *videoCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
 AVCodecContext *videoCodecContext = avcodec_alloc_context3(videoCodec);
 if (!videoCodecContext) {
 std::cerr << "Error allocating video codec context" << std::endl;
 exit(EXIT_FAILURE);
 }
 videoCodecContext->bit_rate = 200000;
 videoCodecContext->width = width;
 videoCodecContext->height = height;
 videoCodecContext->time_base = (AVRational){ 1, fps };
 videoCodecContext->framerate = (AVRational){ fps, 1 };
 videoCodecContext->gop_size = 12;
 videoCodecContext->max_b_frames = 0;
 videoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
 if (formatContext->oformat->flags & AVFMT_GLOBALHEADER) {
 videoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }
 error = avcodec_open2(videoCodecContext, videoCodec, nullptr);
 checkError(error, "Error opening");
 error = avcodec_parameters_from_context(videoStream->codecpar, videoCodecContext);
 checkError(error, "Error setting video codec parameters");

 // open the output file
 error = avio_open(&formatContext->pb, video_path.c_str(), AVIO_FLAG_WRITE);
 checkError(error, "Error opening output file");

 // write the video file header
 error = avformat_write_header(formatContext, nullptr);
 checkError(error, "Error writing video file header");


 AVPacket *packet = av_packet_alloc();
 if (!packet) {
 std::cerr << "Error allocating packet" << std::endl;
 exit(EXIT_FAILURE);
 }
 for (const cv::Mat &frame : frames) {
 // convert the cv::Mat to an AVFrame
 AVFrame *avFrame = av_frame_alloc();
 avFrame->format = videoCodecContext->pix_fmt;
 avFrame->width = width;
 avFrame->height = height;
 error = av_frame_get_buffer(avFrame, 0);
 checkError(error, "Error allocating frame buffer");
 struct SwsContext *frameConverter = sws_getContext(width, height, AV_PIX_FMT_BGR24, width, height, videoCodecContext->pix_fmt, SWS_BICUBIC, nullptr, nullptr, nullptr);
 uint8_t *srcData[AV_NUM_DATA_POINTERS] = { frame.data };
 int srcLinesize[AV_NUM_DATA_POINTERS] = { static_cast<int>(frame.step) };
 sws_scale(frameConverter, srcData, srcLinesize, 0, height, avFrame->data, avFrame->linesize);
 sws_freeContext(frameConverter);

 // encode the AVFrame
 avFrame->pts = packet->pts;
 error = avcodec_send_frame(videoCodecContext, avFrame);
 checkError(error, "Error sending frame to video codec");
 while (error >= 0) {
 error = avcodec_receive_packet(videoCodecContext, packet);
 if (error == AVERROR(EAGAIN) || error == AVERROR_EOF) {
 break;
 }
 checkError(error, "Error encoding video frame");

 // write the encoded packet to the output file
 packet->stream_index = videoStream->index;
 error = av_interleaved_write_frame(formatContext, packet);
 checkError(error, "Error writing video packet");
 av_packet_unref(packet);
 }
 av_frame_free(&avFrame);
 }

 // clean up
 av_packet_free(&packet);
 avcodec_free_context(&videoCodecContext);
 avformat_free_context(formatContext);
 avformat_network_deinit();

 return EXIT_SUCCESS;
}

std::vector<cv::Mat> readVideo(const std::string video_path) {
 // initialize FFmpeg
 av_log_set_level(AV_LOG_ERROR);
 avformat_network_init();

 AVFormatContext* formatContext = nullptr;
 int error = avformat_open_input(&formatContext, video_path.c_str(), nullptr, nullptr);
 checkError(error, "Error opening input file");

 //Read packets of a media file to get stream information.
 
 error = avformat_find_stream_info(formatContext, nullptr);
 checkError(error, "Error avformat find stream info");
 
 // find the video stream
 AVStream* videoStream = nullptr;
 for (unsigned int i = 0; i < formatContext->nb_streams; i++) {
 if (formatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && !videoStream) {
 videoStream = formatContext->streams[i];
 }
 }
 if (!videoStream) {
 std::cerr << "Error: input file does not contain a video stream" << std::endl;
 exit(EXIT_FAILURE);
 }

 // create the video codec context
 const AVCodec* videoCodec = avcodec_find_decoder(videoStream->codecpar->codec_id);
 AVCodecContext* videoCodecContext = avcodec_alloc_context3(videoCodec);
 if (!videoCodecContext) {
 std::cerr << "Error allocating video codec context" << std::endl;
 exit(EXIT_FAILURE);
 }
 
 std::cout << "::informations::\n";
 std::cout << " bit_rate:" << videoCodecContext->bit_rate << "\n";
 std::cout << " width:" << videoCodecContext->width << "\n";
 std::cout << " height:" << videoCodecContext->height << "\n";
 std::cout << " gop_size:" << videoCodecContext->gop_size << "\n";
 std::cout << " max_b_frames:" << videoCodecContext->max_b_frames << "\n";
 std::cout << " pix_fmt:" << videoCodecContext->pix_fmt << "\n";
 
 error = avcodec_parameters_to_context(videoCodecContext, videoStream->codecpar);
 checkError(error, "Error setting video codec context parameters");
 error = avcodec_open2(videoCodecContext, videoCodec, nullptr);
 checkError(error, "Error opening video codec");

 // create the frame scaler
 int width = videoCodecContext->width;
 int height = videoCodecContext->height;
 struct SwsContext* frameScaler = sws_getContext(width, height, videoCodecContext->pix_fmt, width, height, AV_PIX_FMT_BGR24, SWS_BICUBIC, nullptr, nullptr, nullptr);

 // read the packets and decode the video frames
 std::vector<cv::Mat> videoFrames;
 AVPacket packet;
 while (av_read_frame(formatContext, &packet) == 0) {
 if (packet.stream_index == videoStream->index) {
 // decode the video frame
 AVFrame* frame = av_frame_alloc();
 int gotFrame = 0;
 error = avcodec_send_packet(videoCodecContext, &packet);
 checkError(error, "Error sending packet to video codec");
 error = avcodec_receive_frame(videoCodecContext, frame);

 //There is not enough data for decoding the frame, have to free and get more data
 
 if (error == AVERROR(EAGAIN))
 {
 av_frame_unref(frame);
 av_freep(frame);
 continue;
 }

 if (error == AVERROR_EOF)
 {
 std::cerr << "AVERROR_EOF" << std::endl;
 break;
 }

 checkError(error, "Error receiving frame from video codec");


 if (error == 0) {
 gotFrame = 1;
 }
 if (gotFrame) {
 // scale the frame to the desired format
 AVFrame* scaledFrame = av_frame_alloc();
 av_image_alloc(scaledFrame->data, scaledFrame->linesize, width, height, AV_PIX_FMT_BGR24, 32);
 sws_scale(frameScaler, frame->data, frame->linesize, 0, height, scaledFrame->data, scaledFrame->linesize);

 // copy the frame data to a cv::Mat object
 cv::Mat mat(height, width, CV_8UC3, scaledFrame->data[0], scaledFrame->linesize[0]);

 videoFrames.push_back(mat.clone());

 // clean up
 av_freep(&scaledFrame->data[0]);
 av_frame_free(&scaledFrame);
 }
 av_frame_free(&frame);
 }
 av_packet_unref(&packet);
 }


 // clean up
 sws_freeContext(frameScaler);
 avcodec_free_context(&videoCodecContext);
 avformat_close_input(&formatContext);
 return videoFrames;
}

int main() {
 auto videoFrames = readVideo("input.mp4");
 cv::imwrite("test.png", videoFrames[10]);
 writeVideo("outnow.mp4", videoFrames, 512, 608, 30);
 //writeVideo("outnow.mp4", videoFrames);
 return 0;
}
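

For comparison, one common reason an MP4 produced by code shaped like this refuses to play in VLC is that the encoder is never flushed and the container trailer is never written, so the file ends without its index data. The following is only a sketch of those finishing steps, reusing the variable names from the question; it is an assumption about the cause, not a confirmed fix.


 // flush the encoder: a null frame tells it to emit any buffered packets
 error = avcodec_send_frame(videoCodecContext, nullptr);
 checkError(error, "Error flushing encoder");
 while (true) {
  error = avcodec_receive_packet(videoCodecContext, packet);
  if (error == AVERROR(EAGAIN) || error == AVERROR_EOF) {
   break;
  }
  checkError(error, "Error draining encoder");
  packet->stream_index = videoStream->index;
  av_packet_rescale_ts(packet, videoCodecContext->time_base, videoStream->time_base);
  error = av_interleaved_write_frame(formatContext, packet);
  checkError(error, "Error writing flushed packet");
  av_packet_unref(packet);
 }

 // write the container trailer; for MP4 this finalizes the index, and without it
 // most players, including VLC, cannot play the file
 error = av_write_trailer(formatContext);
 checkError(error, "Error writing trailer");

 // close the AVIOContext opened with avio_open()
 avio_closep(&formatContext->pb);


Monotonically increasing frame timestamps (for example avFrame->pts = frameIndex++;) would also be needed; copying packet->pts into the frame, as the loop in the question does, leaves the timestamps effectively unset.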


-
Streaming Anki Vector's camera
25 November 2023, by Brendan Goode
I am trying to stream my robot to Remo.tv with my Vector robot. The website recognizes that I am going live but does not stream what the robot's camera is seeing. I have confirmed the camera works with a local application that runs the SDK. The very end of the code is what is giving issues; it appears somebody ripped code from Cozmo and tried to paste it into a Vector file. The problem is that the camera seems to be taking pictures, and we reach the point where it attempts to send the photo, but it fails.


# This is a dummy file to allow the automatic loading of modules without error on none.
import anki_vector
import atexit
import time
import _thread as thread
import logging
import networking

log = logging.getLogger('RemoTV.vector')
vector = None
reserve_control = None
robotKey = None
volume = 100 #this is stupid, but who cares
annotated = False

def connect():
 global vector
 global reserve_control

 log.debug("Connecting to Vector")
 vector = anki_vector.AsyncRobot()
 vector.connect()
 #reserve_control = anki_vector.behavior.ReserveBehaviorControl()
 
 atexit.register(exit)

 return(vector)

def exit():
 log.debug("Vector exiting")
 vector.disconnect()
 
def setup(robot_config):
 global forward_speed
 global turn_speed
 global volume
 global vector
 global charge_high
 global charge_low
 global stay_on_dock

 global robotKey
 global server
 global no_mic
 global no_camera
 global ffmpeg_location
 global v4l2_ctl_location
 global x_res
 global y_res
 
 robotKey = robot_config.get('robot', 'robot_key')

 if robot_config.has_option('misc', 'video_server'):
 server = robot_config.get('misc', 'video_server')
 else:
 server = robot_config.get('misc', 'server')
 
 no_mic = robot_config.getboolean('camera', 'no_mic')
 no_camera = robot_config.getboolean('camera', 'no_camera')

 ffmpeg_location = robot_config.get('ffmpeg', 'ffmpeg_location')
 v4l2_ctl_location = robot_config.get('ffmpeg', 'v4l2-ctl_location')

 x_res = robot_config.getint('camera', 'x_res')
 y_res = robot_config.getint('camera', 'y_res')


 if vector == None:
 vector = connect()

 #x mod_utils.repeat_task(30, check_battery, coz)

 if robot_config.has_section('cozmo'):
 forward_speed = robot_config.getint('cozmo', 'forward_speed')
 turn_speed = robot_config.getint('cozmo', 'turn_speed')
 volume = robot_config.getint('cozmo', 'volume')
 charge_high = robot_config.getfloat('cozmo', 'charge_high')
 charge_low = robot_config.getfloat('cozmo', 'charge_low')
 stay_on_dock = robot_config.getboolean('cozmo', 'stay_on_dock')

# if robot_config.getboolean('tts', 'ext_chat'): #ext_chat enabled, add motor commands
# extended_command.add_command('.anim', play_anim)
# extended_command.add_command('.forward_speed', set_forward_speed)
# extended_command.add_command('.turn_speed', set_turn_speed)
# extended_command.add_command('.vol', set_volume)
# extended_command.add_command('.charge', set_charging)
# extended_command.add_command('.stay', set_stay_on_dock)

 vector.audio.set_master_volume(volume) # set volume

 return
 
def move(args):
 global charging
 global low_battery
 command = args['button']['command']

 try:
 if vector.status.is_on_charger and not charging:
 if low_battery:
 print("Started Charging")
 charging = 1
 else:
 if not stay_on_dock:
 vector.drive_off_charger_contacts().wait_for_completed()

 if command == 'f':
 vector.behavior.say_text("Moving {}".format(command))

 #causes delays #coz.drive_straight(distance_mm(10), speed_mmps(50), False, True).wait_for_completed()
 vector.motors.set_wheel_motors(forward_speed, forward_speed, forward_speed*4, forward_speed*4 )
 time.sleep(0.7)
 vector.motors.set_wheel_motors(0, 0)
 elif command == 'b':
 #causes delays #coz.drive_straight(distance_mm(-10), speed_mmps(50), False, True).wait_for_completed()
 vector.motors.set_wheel_motors(-forward_speed, -forward_speed, -forward_speed*4, -forward_speed*4 )
 time.sleep(0.7)
 vector.motors.set_wheel_motors(0, 0)
 elif command == 'l':
 #causes delays #coz.turn_in_place(degrees(15), False).wait_for_completed()
 vector.motors.set_wheel_motors(-turn_speed, turn_speed, -turn_speed*4, turn_speed*4 )
 time.sleep(0.5)
 vector.motors.set_wheel_motors(0, 0)
 elif command == 'r':
 #causes delays #coz.turn_in_place(degrees(-15), False).wait_for_completed()
 vector.motors.set_wheel_motors(turn_speed, -turn_speed, turn_speed*4, -turn_speed*4 )
 time.sleep(0.5)
 vector.motors.set_wheel_motors(0, 0)

 #move lift
 elif command == 'w':
 vector.behavior.say_text("w")
 vector.set_lift_height(height=1).wait_for_completed()
 elif command == 's':
 vector.behavior.say_text("s")
 vector.set_lift_height(height=0).wait_for_completed()

 #look up down
 #-25 (down) to 44.5 degrees (up)
 elif command == 'q':
 #head_angle_action = coz.set_head_angle(degrees(0))
 #clamped_head_angle = head_angle_action.angle.degrees
 #head_angle_action.wait_for_completed()
 vector.behaviour.set_head_angle(45)
 time.sleep(0.35)
 vector.behaviour.set_head_angle(0)
 elif command == 'a':
 #head_angle_action = coz.set_head_angle(degrees(44.5))
 #clamped_head_angle = head_angle_action.angle.degrees
 #head_angle_action.wait_for_completed()
 vector.behaviour.set_head_angle(-22.0)
 time.sleep(0.35)
 vector.behaviour.set_head_angle(0)
 
 #things to say with TTS disabled
 elif command == 'sayhi':
 tts.say( "hi! I'm cozmo!" )
 elif command == 'saywatch':
 tts.say( "watch this" )
 elif command == 'saylove':
 tts.say( "i love you" )
 elif command == 'saybye':
 tts.say( "bye" )
 elif command == 'sayhappy':
 tts.say( "I'm happy" )
 elif command == 'saysad':
 tts.say( "I'm sad" )
 elif command == 'sayhowru':
 tts.say( "how are you?" )
 except:
 return(False)
 return

def start():
 log.debug("Starting Vector Video Process")
 try:
 thread.start_new_thread(video, ())
 except KeyboardInterrupt as e:
 pass 
 return
 
def video():
 global vector
 # Turn on image receiving by the camera
 vector.camera.init_camera_feed()

 vector.behavior.say_text("hey everyone, lets robot!")

 while True:
 time.sleep(0.25)

 from subprocess import Popen, PIPE
 from sys import platform

 log.debug("ffmpeg location : {}".format(ffmpeg_location))

# import os
# if not os.path.isfile(ffmpeg_location):
# print("Error: cannot find " + str(ffmpeg_location) + " check ffmpeg is installed. Terminating controller")
# thread.interrupt_main()
# thread.exit()

 while not networking.authenticated:
 time.sleep(1)

 p = Popen([ffmpeg_location, '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', '25', '-i', '-', '-vcodec', 'mpeg1video', '-r', '25', "-f", "mpegts", "-headers", "\"Authorization: Bearer {}\"".format(robotKey), "http://{}:1567/transmit?name={}-video".format(server, networking.channel_id)], stdin=PIPE)
 #p = Popen([ffmpeg_location, '-nostats', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', '25', '-i', '-', '-vcodec', 'mpeg1video', '-r', '25','-b:v', '400k', "-f","mpegts", "-headers", "\"Authorization: Bearer {}\"".format(robotKey), "http://{}/transmit?name=rbot-390ddbe0-f1cc-4710-b3f1-9f477f4875f9-video".format(server)], stdin=PIPE)
 #p = Popen([ffmpeg_location, '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', '25', '-i', '-', '-vcodec', 'mpeg1video', '-r', '25', "-f", "mpegts", "-headers", "\"Authorization: Bearer {}\"".format(robotKey), "http://{}/transmit?name=rbot-390ddbe0-f1cc-4710-b3f1-9f477f4875f9-video".format(server, networking.channel_id)], stdin=PIPE)
 print(vector)
 image = vector.camera.latest_image
 image.raw_image.save("test.png", 'PNG')
 try:
 while True:
 if vector:
 image = vector.camera.latest_image
 if image:
 if annotated:
 image = image.annotate_image()
 else:
 image = image.raw_image
 print("attempting to write image")
 image.save(p.stdin, 'PNG')

 else:
 time.sleep(.1)
 log.debug("Lost Vector object, terminating video stream")
 p.stdin.close()
 p.wait()
 except Exception as e:
 log.debug("Vector Video Exception! {}".format(e))
 p.stdin.close()
 p.wait()
 pass 
 



Here is the error we get


[vost#0:0/mpeg1video @ 000001c7153c1cc0] Error submitting a packet to the muxer: Error number -10053 occurred
[out#0/mpegts @ 000001c713448480] Error muxing a packet
[out#0/mpegts @ 000001c713448480] Error writing trailer: Error number -10053 occurred
[http @ 000001c7134cab00] URL read error: Error number -10053 occurred
[out#0/mpegts @ 000001c713448480] Error closing file: Error number -10053 occurred
[out#0/mpegts @ 000001c713448480] video:56kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame= 25 fps=0.0 q=2.0 Lsize= 53kB time=00:00:01.32 bitrate= 325.9kbits/s speed=7.05x
Conversion failed!

attempting to write image



You can see our attempts at a fix in the commented-out #p lines at the bottom.
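
One detail worth checking (this is an assumption, not something established above): error number -10053 is the Windows "connection aborted" code, meaning the server dropped the HTTP connection mid-stream. Because the command is handed to Popen as a list, no shell ever strips the escaped quotes, so the literal " characters are sent as part of the Authorization header value, which the endpoint may refuse. A sketch of the same invocation without the embedded quotes:


 # hypothetical variant of the Popen call above: the -headers value is passed
 # without literal quote characters, since Popen with an argument list bypasses
 # the shell and would otherwise send the quotes as part of the header itself
 p = Popen([ffmpeg_location, '-y',
  '-f', 'image2pipe', '-vcodec', 'png', '-r', '25', '-i', '-',
  '-vcodec', 'mpeg1video', '-r', '25', '-f', 'mpegts',
  '-headers', 'Authorization: Bearer {}\r\n'.format(robotKey),
  'http://{}:1567/transmit?name={}-video'.format(server, networking.channel_id)],
  stdin=PIPE)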