
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (41)
-
No talk of "market", "cloud", etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the fads that flourish so freely on the web 2.0 and in the companies that live off it.
You are therefore invited to banish the use of the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creations on the Internet and lets authors keep optimal autonomy.
No "Gold or Premium contract" is therefore planned, no (...)
-
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries
FFMpeg: the main encoder; it transcodes almost all types of video and audio files into formats readable on the Internet. See this tutorial for its installation;
Oggz-tools: tools for inspecting ogg files;
Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries
flvtool2: (...)
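For illustration, a typical transcode of the kind FFMpeg handles here (an invented example; the actual commands are in the tutorial referenced above) would be:

ffmpeg -i source.avi -c:v libtheora -c:a libvorbis output.ogv

which re-encodes a video into the free Ogg/Theora/Vorbis formats that browsers of the time could read.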
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used.
The HTML5 player used was created specifically for MediaSPIP: it is fully graphically customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (5488)
-
libavcodec/zmbvenc: motion estimation improvements/bug fixes:
7 February 2019, by Matthew Fearnley
Clamp ME range to -64..63 (prevents corruption when me_range is too high)
Allow MVs up to *and including* the positive range limit
Allow out-of-edge ME by padding the prev buffer with a border of 0s
Try the previous MV before checking the rest (improves speed in some cases)
More robust logic in code - ensure *mx, *my, *xored are updated together
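For illustration, the search order these notes describe might look roughly like this (a sketch with invented names such as block_sad and motion_search; the real zmbvenc code differs):

/* Illustrative only - not the actual zmbvenc code. Assumes prev has been
 * padded with a border of 0s so out-of-edge candidates are readable. */
#include <stdint.h>
#include <stdlib.h>

#define BW 16 /* illustrative block size */
#define BH 16

/* Sum of absolute differences between the current block and a candidate. */
static int block_sad(const uint8_t *cur, const uint8_t *ref, int stride)
{
    int sad = 0;
    for (int y = 0; y < BH; y++)
        for (int x = 0; x < BW; x++)
            sad += abs(cur[y * stride + x] - ref[y * stride + x]);
    return sad;
}

static void motion_search(const uint8_t *cur, const uint8_t *prev,
                          int stride, int me_range, int *mx, int *my)
{
    /* ZMBV stores each MV component in 7 bits, so clamp to -64..63. */
    int lo = -(me_range < 64 ? me_range : 64);
    int hi = me_range < 63 ? me_range : 63;

    /* Try the previous MV first; a perfect match lets us exit early. */
    int best = block_sad(cur, prev + *my * stride + *mx, stride);
    if (best == 0)
        return;

    for (int dy = lo; dy <= hi; dy++) {     /* <=: the positive range  */
        for (int dx = lo; dx <= hi; dx++) { /* limit is included       */
            int sad = block_sad(cur, prev + dy * stride + dx, stride);
            if (sad < best) {
                best = sad;
                *mx = dx;
                *my = dy;
            }
        }
    }
}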
-
FFmpeg: chromakey without green edges
5 September 2020, by Igniter
I have a video of a person on a green background and I'm trying to make the background transparent with this:

ffmpeg -i bg.mp4 -i man.mp4 -filter_complex '[1:v]colorkey=0x00ff00:0.3:0.3[ckout];[0:v][ckout]overlay[out]' -map '[out]' result.mp4

Colorkey gives a quite noticeable green edge around the person's figure.

Any attempt to increase the opacity or blend parameters results in disappearing facial features.

Is there any smart way to replace pure green (0x00ff00) pixels with transparent ones?
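One thing worth trying (a sketch, untested here, assuming an FFmpeg build recent enough to include the despill filter) is to key with a lower blend and then strip the residual green spill from the edge pixels instead of keying it away:

ffmpeg -i bg.mp4 -i man.mp4 -filter_complex '[1:v]colorkey=0x00ff00:0.3:0.1,despill=type=green[ckout];[0:v][ckout]overlay[out]' -map '[out]' result.mp4

Lowering the blend value keeps facial detail intact, while despill desaturates the green fringe that colorkey leaves behind.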

-
ffmpeg/c++: Encode additional information of a video frame with ffmpeg
29 January 2018, by 8793
I am new to ffmpeg and video encoding. After looking through some related questions on this site, I found this post, which was very useful for understanding the overall ffmpeg process.
However, my work involves more than manipulating the Mat frame: after extracting important information from the video (edges, the position of each edge block, the type of each edge block, the block number, motion vectors), I have to encode it and send it to the client. I tried to find example code for this part, but it seems nobody has done it before.
My problem is how to encode this additional information along with the video frames and send both to the client. I read about Huffman coding, which provides lossless compression, but is it possible to encode the edge and motion data with Huffman coding while encoding the video frames with ffmpeg? I'm experimenting over the UDP protocol.
I cannot find any information about this.
I have read about metadata and side information in ffmpeg, but they are not what I want. I hope you can give me advice or a direction to research in this area, so I can understand it and try to implement it. If there is any example code for this case, I would be very grateful if you shared it.
Thank you so much.
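For what it's worth, one minimal sketch of the usual do-it-yourself approach (hypothetical names like SideData and pack_frame; this is not an FFmpeg API) is to give each datagram its own framing, with a length prefix per payload, so the edge/motion metadata travels next to the encoded packet and the client can split them apart again:

// Hypothetical framing sketch: one datagram = [meta size][meta][pkt size][pkt].
// SideData and pack_frame are illustrative names, not FFmpeg API.
#include <cstdint>
#include <vector>

struct SideData {            // per-frame analysis results (illustrative)
    uint16_t blockNumber;
    uint8_t  blockType;
    int8_t   mvx, mvy;       // motion vector of the block
};

// Append a 32-bit little-endian length prefix followed by the payload.
static void put_chunk(std::vector<uint8_t>& out, const uint8_t* p, uint32_t n) {
    for (int i = 0; i < 4; i++)
        out.push_back((n >> (8 * i)) & 0xFF);
    out.insert(out.end(), p, p + n);
}

// Build one datagram carrying the side data and the encoded packet.
std::vector<uint8_t> pack_frame(const std::vector<SideData>& blocks,
                                const uint8_t* pktData, uint32_t pktSize) {
    std::vector<uint8_t> dgram;
    put_chunk(dgram,
              reinterpret_cast<const uint8_t*>(blocks.data()),
              static_cast<uint32_t>(blocks.size() * sizeof(SideData)));
    put_chunk(dgram, pktData, pktSize);
    return dgram;            // hand this to your UDP send, e.g. sendto()
}

The client reads the two length prefixes back in the same order. The metadata chunk can also be compressed independently before framing (zlib's DEFLATE, for instance, uses Huffman coding internally), without touching what ffmpeg does to the pixel data.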
Below is the encoder part on the server side:
int encode(Mat& input_frame, EncodedCallback callback, void* userdata = nullptr) {
    /* encode one frame */
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL; // packet data will be allocated by the encoder
    pkt.size = 0;
    int size = 0;
    fflush(stdout);
    // Convert the OpenCV BGR frame into the encoder's YUV420 AVFrame.
    cvtFrame2AVFrameYUV420(input_frame, &frame);
    static int time; // simple frame counter used as the pts
    frame->pts = time++;
    /* encode the image */
    ret = avcodec_send_frame(c, frame);
    if (ret < 0) {
        fprintf(stderr, "Error avcodec_send_frame\n");
        exit(1);
    }
    nbFramesEncoded++;
    ret = avcodec_receive_packet(c, &pkt);
    if (!isFirstFrameEmmited) {
        nbNeededFramesInBuffer++;
        printf("nbNeededFramesInBuffer: %d\n", nbNeededFramesInBuffer);
    }
    if (ret < 0) {
        if (ret == AVERROR(EAGAIN)) {
            // Output is not available yet; the encoder needs more input.
        } else {
            fprintf(stderr, "Error avcodec_receive_packet %d\n", ret);
            exit(1);
        }
    } else {
        if (callback) {
            callback(pkt, userdata);
        }
        size = pkt.size + 4; // packet size plus (presumably) a 4-byte length prefix
        av_packet_unref(&pkt);
    }
    return size;
}

Below is the code that handles frame processing (at present it classifies blocks and sends the ones with motion to the client):
void updateFrame(Mat& frame) {
    // Make sure every stream (full frame + each grid block) has emitted
    // its first frame before motion detection starts.
    bool isReady = true;
    if (!frameStreamer->encoder->isFirstFrameEmmited) {
        frameStreamer->sendFrame(frame);
        isReady = false;
    }
    for (int yidx = 0; yidx < gridSize.height; yidx++) {
        for (int xidx = 0; xidx < gridSize.width; xidx++) {
            StreamPtr& stream = streamGrid[yidx][xidx];
            if (!stream->encoder->isFirstFrameEmmited) {
                Mat block = frame(stream->irect);
                stream->sendFrame(block);
                isReady = false;
            }
        }
    }
    if (isReady == false) {
        return;
    }
    // First ready frame: seed the previous-frame buffer and send it as-is.
    if (pGray.empty()) {
        frameStreamer->sendFrame(frame);
        frameStreamer->sendFrame(frame);
        cvtColor(frame, pGray, CV_BGR2GRAY);
        return;
    }
    // Motion detection: threshold the difference against the previous gray frame.
    Mat gray;
    cvtColor(frame, gray, CV_BGR2GRAY);
    Mat diff;
    absdiff(gray, pGray, diff);
    threshold(diff, diff, NOISE_THRESHOLD, 255, CV_THRESH_BINARY);
    if (HEAT_IMAGE) {
        gray.copyTo(diff, diff);
        imshow("Gray", gray);
        threshold(diff, diff, HEAT_THRESH, 255, CV_THRESH_TOZERO);
    }
    if (USE_MORPH_NOISE) {
        Morph_Noise(diff);
    }
    Mat motionImg = Mat::zeros(frameSize, CV_8UC3);
    // Block classification: mark blocks whose accumulated diff exceeds the threshold.
    int nbModifiedBlocks = 0;
    for (int yidx = 0; yidx < gridSize.height; yidx++) {
        for (int xidx = 0; xidx < gridSize.width; xidx++) {
            Rect irect(xidx * blockSize.width, yidx * blockSize.height,
                       blockSize.width, blockSize.height);
            int blockDiff = sum(diff(irect))[0];
            if (blockDiff > BLOCK_THRESHOLD * 255) {
                this->blockCls.at<uchar>(yidx, xidx) = MODI_BLOCK;
                nbModifiedBlocks++;
            } else {
                this->blockCls.at<uchar>(yidx, xidx) = SKIP_BLOCK;
            }
        }
    }
    // Send: fall back to the full frame when too many blocks changed;
    // otherwise send only the modified blocks, plus a few flush frames.
    if (nbModifiedBlocks > this->nbBlocksThresh) {
        nbSentBytes += this->frameStreamer->sendFrame(frame);
    } else {
        for (int yidx = 0; yidx < gridSize.height; yidx++) {
            for (int xidx = 0; xidx < gridSize.width; xidx++) {
                uchar cls = this->blockCls.at<uchar>(yidx, xidx);
                StreamPtr& stream = streamGrid[yidx][xidx];
                bool send = false;
                if (cls == MODI_BLOCK) {
                    if (DEBUG_NETWORK) {
                        printf("Normal (%d, %d): ", xidx, yidx);
                    }
                    send = true;
                    stream->encoder->nbFramesBuffered = stream->encoder->nbNeededFramesInBuffer;
                    rectangle(motionImg, stream->irect, Scalar(0, 0, 255), CV_FILLED);
                } else if (stream->encoder->nbFramesBuffered > 0) {
                    if (DEBUG_NETWORK) {
                        printf("Extra (%d, %d): ", xidx, yidx);
                    }
                    send = true;
                    stream->encoder->nbFramesBuffered--;
                    stream->encoder->nbFlushFrames++;
                    rectangle(motionImg, stream->irect, Scalar(0, 255, 0), CV_FILLED);
                }
                if (send) {
                    Mat block = frame(stream->irect);
                    nbSentBytes += stream->sendFrame(block);
                    gray(stream->irect).copyTo(pGray(stream->irect));
                }
            }
        }
    }
}