
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (112)
-
Customising by adding a logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Customising categories
21 June 2013. Category creation form.
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a category-type document, the fields offered by default are: Text.
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a media-type document, the fields not displayed by default are: Quick description.
It is also in this configuration section that you can specify the (...)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.
On other sites (9171)
-
How to get .mp4 videos from motion on a Raspberry Pi?
3 November 2017, by Maarti
I use motion on my laptop and it works perfectly in any format. But when I use it on my Raspberry Pi 3 (Raspbian Jessie) with the Raspberry Camera V2, the only formats that work are .avi and .swf. When I choose any other format, the output video is a "0 sec video" that is played and closed instantly.
I would like to have .mp4 or .ogg output so that I can play it easily with HTML5. Here is the motion codec documentation.
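As a stop-gap, an existing .avi recording can usually be transcoded for HTML5 playback with ffmpeg itself; this is only a sketch and not part of the original setup (it assumes an ffmpeg build with libx264, and the file names are placeholders):
ffmpeg -i capture.avi -c:v libx264 -pix_fmt yuv420p -movflags +faststart capture.mp4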
Here is my config file:
############################################################
# Daemon
############################################################
# Start in daemon (background) mode and release terminal (default: off)
daemon on
# File to store the process ID, also called pid file. (default: not defined)
process_id_file /var/run/motion/motion.pid
############################################################
# Basic Setup Mode
############################################################
# Start in Setup-Mode, daemon disabled. (default: off)
setup_mode off
# Use a file to save logs messages, if not defined stderr and syslog is used. (default: not defined)
#logfile /mnt/camshare/Cam1/motion.log
logfile /tmp/motion.log
# Level of log messages [1..9] (EMR, ALR, CRT, ERR, WRN, NTC, INF, DBG, ALL). (default: 6 / NTC)
log_level 2
# Filter to log messages by type (COR, STR, ENC, NET, DBL, EVT, TRK, VID, ALL). (default: ALL)
log_type all
###########################################################
# Capture device options
############################################################
# Videodevice to be used for capturing (default /dev/video0)
# for FreeBSD default is /dev/bktr0
#videodevice /dev/video0
# v4l2_palette allows to choose preferable palette to be use by motion
# to capture from those supported by your videodevice. (default: 17)
# E.g. if your videodevice supports both V4L2_PIX_FMT_SBGGR8 and
# V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.
# Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8
# instead.
#
# Values :
# V4L2_PIX_FMT_SN9C10X : 0 'S910'
# V4L2_PIX_FMT_SBGGR16 : 1 'BYR2'
# V4L2_PIX_FMT_SBGGR8 : 2 'BA81'
# V4L2_PIX_FMT_SPCA561 : 3 'S561'
# V4L2_PIX_FMT_SGBRG8 : 4 'GBRG'
# V4L2_PIX_FMT_SGRBG8 : 5 'GRBG'
# V4L2_PIX_FMT_PAC207 : 6 'P207'
# V4L2_PIX_FMT_PJPG : 7 'PJPG'
# V4L2_PIX_FMT_MJPEG : 8 'MJPEG'
# V4L2_PIX_FMT_JPEG : 9 'JPEG'
# V4L2_PIX_FMT_RGB24 : 10 'RGB3'
# V4L2_PIX_FMT_SPCA501 : 11 'S501'
# V4L2_PIX_FMT_SPCA505 : 12 'S505'
# V4L2_PIX_FMT_SPCA508 : 13 'S508'
# V4L2_PIX_FMT_UYVY : 14 'UYVY'
# V4L2_PIX_FMT_YUYV : 15 'YUYV'
# V4L2_PIX_FMT_YUV422P : 16 '422P'
# V4L2_PIX_FMT_YUV420 : 17 'YU12'
#
v4l2_palette 7
# Tuner device to be used for capturing using tuner as source (default /dev/tuner0)
# This is ONLY used for FreeBSD. Leave it commented out for Linux
; tunerdevice /dev/tuner0
# The video input to be used (default: -1)
# Should normally be set to 0 or 1 for video/TV cards, and -1 for USB cameras
input -1
# The video norm to use (only for video capture and TV tuner cards)
# Values: 0 (PAL), 1 (NTSC), 2 (SECAM), 3 (PAL NC no colour). Default: 0 (PAL)
norm 0
# The frequency to set the tuner to (kHz) (only for TV tuner cards) (default: 0)
frequency 0
# Rotate image this number of degrees. The rotation affects all saved images as
# well as movies. Valid values: 0 (default = no rotation), 90, 180 and 270.
rotate 0
# Image width (pixels). Valid range: Camera dependent, default: 352
#width 1024
width 640
# Image height (pixels). Valid range: Camera dependent, default: 288
#height 576
height 480
# Maximum number of frames to be captured per second.
# Valid range: 2-100. Default: 100 (almost no limit).
framerate 15
# Minimum time in seconds between capturing picture frames from the camera.
# Default: 0 = disabled - the capture rate is given by the camera framerate.
# This option is used when you want to capture images at a rate lower than 2 per second.
minimum_frame_time 0
# URL to use if you are using a network camera, size will be autodetected (incl http:// ftp:// mjpg:// or file:///)
# Must be a URL that returns single jpeg pictures or a raw mjpeg stream. Default: Not defined
;netcam_url http://127.0.0.1/cgi-bin/raspicam.sh
# Username and password for network camera (only if required). Default: not defined
# Syntax is user:password
; netcam_userpass value
# The setting for keep-alive of network socket, should improve performance on compatible net cameras.
# off: The historical implementation using HTTP/1.0, closing the socket after each http request.
# force: Use HTTP/1.0 requests with keep alive header to reuse the same connection.
# on: Use HTTP/1.1 requests that support keep alive as default.
# Default: off
netcam_keepalive off
# URL to use for a netcam proxy server, if required, e.g. "http://myproxy".
# If a port number other than 80 is needed, use "http://myproxy:1234".
# Default: not defined
; netcam_proxy value
# Set less strict jpeg checks for network cameras with a poor/buggy firmware.
# Default: off
netcam_tolerant_check off
# Let motion regulate the brightness of a video device (default: off).
# The auto_brightness feature uses the brightness option as its target value.
# If brightness is zero auto_brightness will adjust to average brightness value 128.
# Only recommended for cameras without auto brightness
auto_brightness off
# Set the initial brightness of a video device.
# If auto_brightness is enabled, this value defines the average brightness level
# which Motion will try and adjust to.
# Valid range 0-255, default 0 = disabled
brightness 0
# Set the contrast of a video device.
# Valid range 0-255, default 0 = disabled
contrast 0
# Set the saturation of a video device.
# Valid range 0-255, default 0 = disabled
saturation 0
# Set the hue of a video device (NTSC feature).
# Valid range 0-255, default 0 = disabled
hue 0
############################################################
# File "camera" support - read raw YUV data from a file
############################################################
#filecam_path /home/pi/test-cap/motion-mmal.capture
############################################################
# OpenMax/MMAL camera support for Raspberry Pi
############################################################
mmalcam_name vc.ril.camera
#mmalcam_control_params
#mmalcam_raw_capture_file /home/pi/motion-mmal.capture
# Switch this setting to "on" to use the still image mode of the Pi's camera
# instead of video. This gives a wider field of view, but requires
# a much slower frame-rate to achieve exposure stability
# (e.g. 0.25 fps or slower). You can use the minimum_frame_time
# parameter above to achieve this
mmalcam_use_still off
############################################################
# Round Robin (multiple inputs on same video device name)
############################################################
# Number of frames to capture in each roundrobin step (default: 1)
roundrobin_frames 1
# Number of frames to skip before each roundrobin step (default: 1)
roundrobin_skip 1
# Try to filter out noise generated by roundrobin (default: off)
switchfilter off
############################################################
# Motion Detection Settings:
############################################################
# Threshold for number of changed pixels in an image that
# triggers motion detection (default: 1500)
threshold 1500
# Automatically tune the threshold down if possible (default: off)
threshold_tune off
# Noise threshold for the motion detection (default: 32)
noise_level 32
# Automatically tune the noise threshold (default: on)
noise_tune on
# Despeckle motion image using (e)rode or (d)ilate or (l)abel (Default: not defined)
# Recommended value is EedDl. Any combination (and number of) of E, e, d, and D is valid.
# (l)abeling must only be used once and the 'l' must be the last letter.
# Comment out to disable
despeckle_filter EedDl
# Detect motion in predefined areas (1 - 9). Areas are numbered like that: 1 2 3
# A script (on_area_detected) is started immediately when motion is 4 5 6
# detected in one of the given areas, but only once during an event. 7 8 9
# One or more areas can be specified with this option. Take care: This option
# does NOT restrict detection to these areas! (Default: not defined)
; area_detect value
# PGM file to use as a sensitivity mask.
# Full path name to. (Default: not defined)
; mask_file value
# Dynamically create a mask file during operation (default: 0)
# Adjust speed of mask changes from 0 (off) to 10 (fast)
smart_mask_speed 0
# Ignore sudden massive light intensity changes given as a percentage of the picture
# area that changed intensity. Valid range: 0 - 100 , default: 0 = disabled
lightswitch 0
# Picture frames must contain motion at least the specified number of frames
# in a row before they are detected as true motion. At the default of 1, all
# motion is detected. Valid range: 1 to thousands, recommended 1-5
minimum_motion_frames 1
# Specifies the number of pre-captured (buffered) pictures from before motion
# was detected that will be output at motion detection.
# Recommended range: 0 to 5 (default: 0)
# Do not use large values! Large values will cause Motion to skip video frames and
# cause unsmooth movies. To smooth movies use larger values of post_capture instead.
pre_capture 2
# Number of frames to capture after motion is no longer detected (default: 0)
post_capture 2
# Event Gap is the seconds of no motion detection that triggers the end of an event.
# An event is defined as a series of motion images taken within a short timeframe.
# Recommended value is 60 seconds (Default). The value -1 is allowed and disables
# events causing all Motion to be written to one single movie file and no pre_capture.
# If set to 0, motion is running in gapless mode. Movies don't have gaps anymore. An
# event ends right after no more motion is detected and post_capture is over.
event_gap 60
# Maximum length in seconds of an mpeg movie
# When value is exceeded a new movie file is created. (Default: 0 = infinite)
# ATTENTION: when you're not using the motion build from the tutorial, it might fail with error 'Unknown config option "max_mpeg_time"'
# then use this line instead:
# max_movie_time 60
max_movie_time 60
# Always save images even if there was no motion (default: off)
emulate_motion off
############################################################
# Image File Output
############################################################
# Output 'normal' pictures when motion is detected (default: on)
# Valid values: on, off, first, best, center
# When set to 'first', only the first picture of an event is saved.
# Picture with most motion of an event is saved when set to 'best'.
# Picture with motion nearest center of picture is saved when set to 'center'.
# Can be used as preview shot for the corresponding movie.
output_pictures best
# Output pictures with only the pixels moving object (ghost images) (default: off)
output_debug_pictures off
# The quality (in percent) to be used by the jpeg compression (default: 75)
quality 75
# Type of output images
# Valid values: jpeg, ppm (default: jpeg)
picture_type jpeg
############################################################
# FFMPEG related options
# Film (movies) file output, and deinterlacing of the video input
# The options movie_filename and timelapse_filename are also used
# by the ffmpeg feature
############################################################
# Use ffmpeg to encode movies in realtime (default: off)
ffmpeg_output_movies on
# Use ffmpeg to make movies with only the pixels moving
# object (ghost images) (default: off)
ffmpeg_output_debug_movies off
# Use ffmpeg to encode a timelapse movie
# Default value 0 = off - else save frame every Nth second
ffmpeg_timelapse 0
# The file rollover mode of the timelapse video
# Valid values: hourly, daily (default), weekly-sunday, weekly-monday, monthly, manual
ffmpeg_timelapse_mode daily
# Bitrate to be used by the ffmpeg encoder (default: 400000)
# This option is ignored if ffmpeg_variable_bitrate is not 0 (disabled)
ffmpeg_bps 500000
# Enables and defines variable bitrate for the ffmpeg encoder.
# ffmpeg_bps is ignored if variable bitrate is enabled.
# Valid values: 0 (default) = fixed bitrate defined by ffmpeg_bps,
# or the range 2 - 31 where 2 means best quality and 31 is worst.
ffmpeg_variable_bitrate 5
# Codec to used by ffmpeg for the video compression.
# Timelapse mpegs are always made in mpeg1 format independent from this option.
# Supported formats are: mpeg1 (ffmpeg-0.4.8 only), mpeg4 (default), and msmpeg4.
# mpeg1 - gives you files with extension .mpg
# mpeg4 or msmpeg4 - gives you files with extension .avi
# msmpeg4 is recommended for use with Windows Media Player because
# it requires no installation of codec on the Windows client.
# swf - gives you a flash film with extension .swf
# flv - gives you a flash video with extension .flv
# ffv1 - FF video codec 1 for Lossless Encoding ( experimental )
# mov - QuickTime ( testing )
# ogg - Ogg/Theora ( testing )
#ffmpeg_video_codec msmpeg4
ffmpeg_video_codec mp4
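# Note (assumption, not part of the original config): whether the value 'mp4' is
# accepted here depends on the Motion build. Older packages (such as the 3.2.x
# series shipped with Raspbian Jessie) only document the values listed above, and
# an unrecognised value can result in unplayable or zero-length movie files.
# A Motion 4.x build linked against a recent ffmpeg is typically required before
# 'ffmpeg_video_codec mp4' produces a proper H.264 .mp4.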
# Use ffmpeg to deinterlace video. Necessary if you use an analog camera
# and see horizontal combing on moving objects in video or pictures.
# (default: off)
ffmpeg_deinterlace off
############################################################
# SDL Window
############################################################
# Number of motion thread to show in SDL Window (default: 0 = disabled)
#sdl_threadnr 0
############################################################
# External pipe to video encoder
# Replacement for FFMPEG builtin encoder for ffmpeg_output_movies only.
# The options movie_filename and timelapse_filename are also used
# by the ffmpeg feature
#############################################################
# Bool to enable or disable extpipe (default: off)
use_extpipe off
# External program (full path and opts) to pipe raw video to
# Generally, use '-' for STDIN...
;extpipe mencoder -demuxer rawvideo -rawvideo w=320:h=240:i420 -ovc x264 -x264encopts bframes=4:frameref=1:subq=1:scenecut=-1:nob_adapt:threads=1:keyint=1000:8x8dct:vbv_bufsize=4000:crf=24:partitions=i8x8,i4x4:vbv_maxrate=800:no-chroma-me -vf denoise3d=16:12:48:4,pp=lb -of avi -o %f.avi - -fps %fps
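# Alternative sketch (assumption, not from the original post): pipe raw YUV420P
# frames to an external ffmpeg process to obtain an H.264 .mp4 directly. The
# resolution must match the width/height configured above, ffmpeg with libx264
# must be installed, and use_extpipe must be switched on.
;extpipe ffmpeg -y -f rawvideo -pix_fmt yuv420p -video_size 640x480 -framerate %fps -i - -c:v libx264 -preset veryfast -pix_fmt yuv420p %f.mp4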
############################################################
# Snapshots (Traditional Periodic Webcam File Output)
############################################################
# Make automated snapshot every N seconds (default: 0 = disabled)
snapshot_interval 0
############################################################
# Text Display
# %Y = year, %m = month, %d = date,
# %H = hour, %M = minute, %S = second, %T = HH:MM:SS,
# %v = event, %q = frame number, %t = thread (camera) number,
# %D = changed pixels, %N = noise level, \n = new line,
# %i and %J = width and height of motion area,
# %K and %L = X and Y coordinates of motion center
# %C = value defined by text_event - do not use with text_event!
# You can put quotation marks around the text to allow
# leading spaces
############################################################
# Locate and draw a box around the moving object.
# Valid values: on, off, preview (default: off)
# Set to 'preview' will only draw a box in preview_shot pictures.
locate_motion_mode off
# Set the look and style of the locate box if enabled.
# Valid values: box, redbox, cross, redcross (default: box)
# Set to 'box' will draw the traditional box.
# Set to 'redbox' will draw a red box.
# Set to 'cross' will draw a little cross to mark center.
# Set to 'redcross' will draw a little red cross to mark center.
locate_motion_style box
# Draws the timestamp using same options as C function strftime(3)
# Default: %Y-%m-%d\n%T = date in ISO format and time in 24 hour clock
# Text is placed in lower right corner
text_right %d.%m.%Y\n%T
# Draw a user defined text on the images using same options as C function strftime(3)
# Default: Not defined = no text
# Text is placed in lower left corner
; text_left CAMERA %t
text_left HofCam
# Draw the number of changed pixed on the images (default: off)
# Will normally be set to off except when you setup and adjust the motion settings
# Text is placed in upper right corner
text_changes off
# This option defines the value of the special event conversion specifier %C
# You can use any conversion specifier in this option except %C. Date and time
# values are from the timestamp of the first image in the current event.
# Default: %Y%m%d%H%M%S
# The idea is that %C can be used filenames and text_left/right for creating
# a unique identifier for each event.
text_event %Y%m%d%H%M%S
# Draw characters at twice normal size on images. (default: off)
text_double on
# Text to include in a JPEG EXIF comment
# May be any text, including conversion specifiers.
# The EXIF timestamp is included independent of this text.
;exif_text %i%J/%K%L
############################################################
# Target Directories and filenames For Images And Films
# For the options snapshot_, picture_, movie_ and timelapse_filename
# you can use conversion specifiers
# %Y = year, %m = month, %d = date,
# %H = hour, %M = minute, %S = second,
# %v = event, %q = frame number, %t = thread (camera) number,
# %D = changed pixels, %N = noise level,
# %i and %J = width and height of motion area,
# %K and %L = X and Y coordinates of motion center
# %C = value defined by text_event
# Quotation marks round string are allowed.
############################################################
# Target base directory for pictures and films
# Recommended to use absolute path. (Default: current working directory)
target_dir /home/pi
# File path for snapshots (jpeg or ppm) relative to target_dir
# Default: %v-%Y%m%d%H%M%S-snapshot
# Default value is equivalent to legacy oldlayout option
# For Motion 3.0 compatible mode choose: %Y/%m/%d/%H/%M/%S-snapshot
# File extension .jpg or .ppm is automatically added so do not include this.
# Note: A symbolic link called lastsnap.jpg created in the target_dir will always
# point to the latest snapshot, unless snapshot_filename is exactly 'lastsnap'
snapshot_filename %v-%Y%m%d%H%M%S-snapshot
# File path for motion triggered images (jpeg or ppm) relative to target_dir
# Default: %v-%Y%m%d%H%M%S-%q
# Default value is equivalent to legacy oldlayout option
# For Motion 3.0 compatible mode choose: %Y/%m/%d/%H/%M/%S-%q
# File extension .jpg or .ppm is automatically added so do not include this
# Set to 'preview' together with best-preview feature enables special naming
# convention for preview shots. See motion guide for details
picture_filename %v-%Y%m%d%H%M%S-%q
# File path for motion triggered ffmpeg films (movies) relative to target_dir
# Default: %v-%Y%m%d%H%M%S
# Default value is equivalent to legacy oldlayout option
# For Motion 3.0 compatible mode choose: %Y/%m/%d/%H%M%S
# File extension .mpg or .avi is automatically added so do not include this
# This option was previously called ffmpeg_filename
movie_filename %v-%Y%m%d%H%M%S
# File path for timelapse movies relative to target_dir
# Default: %Y%m%d-timelapse
# Default value is near equivalent to legacy oldlayout option
# For Motion 3.0 compatible mode choose: %Y/%m/%d-timelapse
# File extension .mpg is automatically added so do not include this
timelapse_filename %Y%m%d-timelapse
############################################################
# Global Network Options
############################################################
# Enable or disable IPV6 for http control and stream (default: off )
ipv6_enabled off
############################################################
# Live Stream Server
############################################################
# The mini-http server listens to this port for requests (default: 0 = disabled)
stream_port 8080
# Quality of the jpeg (in percent) images produced (default: 50)
stream_quality 50
# Output frames at 1 fps when no motion is detected and increase to the
# rate given by stream_maxrate when motion is detected (default: off)
stream_motion on
# Maximum framerate for stream streams (default: 1)
stream_maxrate 4
# Restrict stream connections to localhost only (default: on)
stream_localhost off
# Limits the number of images per connection (default: 0 = unlimited)
# Number can be defined by multiplying actual stream rate by desired number of seconds
# Actual stream rate is the smallest of the numbers framerate and stream_maxrate
stream_limit 0
# Set the authentication method (default: 0)
# 0 = disabled
# 1 = Basic authentication
# 2 = MD5 digest (the safer authentication)
stream_auth_method 0
# Authentication for the stream. Syntax username:password
# Default: not defined (Disabled)
; stream_authentication username:password
############################################################
# HTTP Based Control
############################################################
# TCP/IP port for the http server to listen on (default: 0 = disabled)
webcontrol_port 8081
# Restrict control connections to localhost only (default: on)
webcontrol_localhost off
# Output for http server, select off to choose raw text plain (default: on)
webcontrol_html_output on
# Authentication for the http based control. Syntax username:password
# Default: not defined (Disabled)
; webcontrol_authentication username:password
############################################################
# Tracking (Pan/Tilt)
#############################################################
# Type of tracker (0=none (default), 1=stepper, 2=iomojo, 3=pwc, 4=generic, 5=uvcvideo, 6=servo)
# The generic type enables the definition of motion center and motion size to
# be used with the conversion specifiers for options like on_motion_detected
track_type 0
# Enable auto tracking (default: off)
track_auto off
# Serial port of motor (default: none)
;track_port /dev/ttyS0
# Motor number for x-axis (default: 0)
;track_motorx 0
# Set motorx reverse (default: 0)
;track_motorx_reverse 0
# Motor number for y-axis (default: 0)
;track_motory 1
# Set motory reverse (default: 0)
;track_motory_reverse 0
# Maximum value on x-axis (default: 0)
;track_maxx 200
# Minimum value on x-axis (default: 0)
;track_minx 50
# Maximum value on y-axis (default: 0)
;track_maxy 200
# Minimum value on y-axis (default: 0)
;track_miny 50
# Center value on x-axis (default: 0)
;track_homex 128
# Center value on y-axis (default: 0)
;track_homey 128
# ID of an iomojo camera if used (default: 0)
track_iomojo_id 0
# Angle in degrees the camera moves per step on the X-axis
# with auto-track (default: 10)
# Currently only used with pwc type cameras
track_step_angle_x 10
[...] -
Video creation with a recent ffmpeg API (2017)
16 November 2017, by ar2015
I have started learning how to work with ffmpeg, whose API deprecations have broken most tutorials and available examples, such as this one. I am looking for code that creates an output video.
Unfortunately, most of the good examples focus on reading from a file rather than creating one.
Here I found a deprecated example, and I spent a long time fixing its errors until it became this:
#include <iostream>
#include
#include
#include <string>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavformat/avio.h>
#include <libavutil/opt.h>
}
#define WIDTH 800
#define HEIGHT 480
#define STREAM_NB_FRAMES ((int)(STREAM_DURATION * FRAME_RATE))
#define FRAME_RATE 24
#define PIXEL_FORMAT AV_PIX_FMT_YUV420P
#define STREAM_DURATION 5.0 //seconds
#define BIT_RATE 400000
#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
#define AVFMT_RAWPICTURE 0x0020
using namespace std;
static int sws_flags = SWS_BICUBIC;
AVFrame *picture, *tmp_picture;
uint8_t *video_outbuf;
int frame_count, video_outbuf_size;
/****** IF LINUX ******/
inline int sprintf_s(char* buffer, size_t sizeOfBuffer, const char* format, ...)
{
va_list ap;
va_start(ap, format);
int result = vsnprintf(buffer, sizeOfBuffer, format, ap);
va_end(ap);
return result;
}
/****** IF LINUX ******/
template <size_t sizeOfBuffer>
inline int sprintf_s(char (&buffer)[sizeOfBuffer], const char* format, ...)
{
va_list ap;
va_start(ap, format);
int result = vsnprintf(buffer, sizeOfBuffer, format, ap);
va_end(ap);
return result;
}
static void closeVideo(AVFormatContext *oc, AVStream *st)
{
avcodec_close(st->codec);
av_free(picture->data[0]);
av_free(picture);
if (tmp_picture)
{
av_free(tmp_picture->data[0]);
av_free(tmp_picture);
}
av_free(video_outbuf);
}
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
AVFrame *picture;
uint8_t *picture_buf;
int size;
picture = av_frame_alloc();
if(!picture)
return NULL;
size = avpicture_get_size(pix_fmt, width, height);
picture_buf = (uint8_t*)(av_malloc(size));
if (!picture_buf)
{
av_free(picture);
return NULL;
}
avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
return picture;
}
static void openVideo(AVFormatContext *oc, AVStream *st)
{
AVCodec *codec;
AVCodecContext *c;
c = st->codec;
if(c->idct_algo == AV_CODEC_ID_H264)
av_opt_set(c->priv_data, "preset", "slow", 0);
codec = avcodec_find_encoder(c->codec_id);
if(!codec)
{
std::cout << "Codec not found." << std::endl;
std::cin.get();std::cin.get();exit(1);
}
if(codec->id == AV_CODEC_ID_H264)
av_opt_set(c->priv_data, "preset", "medium", 0);
if(avcodec_open2(c, codec, NULL) < 0)
{
std::cout << "Could not open codec." << std::endl;
std::cin.get();std::cin.get();exit(1);
}
video_outbuf = NULL;
if(!(oc->oformat->flags & AVFMT_RAWPICTURE))
{
video_outbuf_size = 200000;
video_outbuf = (uint8_t*)(av_malloc(video_outbuf_size));
}
picture = alloc_picture(c->pix_fmt, c->width, c->height);
if(!picture)
{
std::cout << "Could not allocate picture" << std::endl;
std::cin.get();exit(1);
}
tmp_picture = NULL;
if(c->pix_fmt != AV_PIX_FMT_YUV420P)
{
tmp_picture = alloc_picture(AV_PIX_FMT_YUV420P, WIDTH, HEIGHT);
if(!tmp_picture)
{
std::cout << " Could not allocate temporary picture" << std::endl;
std::cin.get();exit(1);
}
}
}
static AVStream* addVideoStream(AVFormatContext *context, enum AVCodecID codecID)
{
AVCodecContext *codec;
AVStream *stream;
stream = avformat_new_stream(context, NULL);
if(!stream)
{
std::cout << "Could not alloc stream." << std::endl;
std::cin.get();exit(1);
}
codec = stream->codec;
codec->codec_id = codecID;
codec->codec_type = AVMEDIA_TYPE_VIDEO;
// sample rate
codec->bit_rate = BIT_RATE;
// resolution must be a multiple of two
codec->width = WIDTH;
codec->height = HEIGHT;
codec->time_base.den = FRAME_RATE; // stream fps
codec->time_base.num = 1;
codec->gop_size = 12; // intra frame every twelve frames at most
codec->pix_fmt = PIXEL_FORMAT;
if(codec->codec_id == AV_CODEC_ID_MPEG2VIDEO)
codec->max_b_frames = 2; // for testing, B frames
if(codec->codec_id == AV_CODEC_ID_MPEG1VIDEO)
codec->mb_decision = 2;
if(context->oformat->flags & AVFMT_GLOBALHEADER)
codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
return stream;
}
static void fill_yuv_image(AVFrame *pict, int frame_index, int width, int height)
{
int x, y, i;
i = frame_index;
/* Y */
for(y=0;y<height;y++) {
for(x=0;x<width;x++) {
pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;
}
}
/* Cb and Cr */
for(y=0;y<height/2;y++) {
for(x=0;x<width/2;x++) {
pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
}
}
}
static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
int out_size, ret;
AVCodecContext *c;
static struct SwsContext *img_convert_ctx;
c = st->codec;
if(frame_count >= STREAM_NB_FRAMES)
{
}
else
{
if(c->pix_fmt != AV_PIX_FMT_YUV420P)
{
if(img_convert_ctx = NULL)
{
img_convert_ctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, WIDTH, HEIGHT,
c->pix_fmt, sws_flags, NULL, NULL, NULL);
if(img_convert_ctx == NULL)
{
std::cout << "Cannot initialize the conversion context" << std::endl;
std::cin.get();exit(1);
}
}
fill_yuv_image(tmp_picture, frame_count, WIDTH, HEIGHT);
sws_scale(img_convert_ctx, tmp_picture->data, tmp_picture->linesize, 0, HEIGHT,
picture->data, picture->linesize);
}
else
{
fill_yuv_image(picture, frame_count, WIDTH, HEIGHT);
}
}
if (oc->oformat->flags & AVFMT_RAWPICTURE)
{
/* raw video case. The API will change slightly in the near
futur for that */
AVPacket pkt;
av_init_packet(&pkt);
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index= st->index;
pkt.data= (uint8_t *)picture;
pkt.size= sizeof(AVPicture);
ret = av_interleaved_write_frame(oc, &pkt);
}
else
{
/* encode the image */
out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size, picture);
/* if zero size, it means the image was buffered */
if (out_size > 0)
{
AVPacket pkt;
av_init_packet(&pkt);
if (c->coded_frame->pts != AV_NOPTS_VALUE)
pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
if(c->coded_frame->key_frame)
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index= st->index;
pkt.data= video_outbuf;
pkt.size= out_size;
/* write the compressed frame in the media file */
ret = av_interleaved_write_frame(oc, &pkt);
} else {
ret = 0;
}
}
if (ret != 0) {
std::cout << "Error while writing video frames" << std::endl;
std::cin.get();exit(1);
}
frame_count++;
}
int main ( int argc, char *argv[] )
{
const char* filename = "test.h264";
AVOutputFormat *outputFormat;
AVFormatContext *context;
AVCodecContext *codec;
AVStream *videoStream;
double videoPTS;
// init libavcodec, register all codecs and formats
av_register_all();
// auto detect the output format from the name
outputFormat = av_guess_format(NULL, filename, NULL);
if(!outputFormat)
{
std::cout << "Cannot guess output format! Using mpeg!" << std::endl;
std::cin.get();
outputFormat = av_guess_format(NULL, "h263" , NULL);
}
if(!outputFormat)
{
std::cout << "Could not find suitable output format." << std::endl;
std::cin.get();exit(1);
}
context = avformat_alloc_context();
if(!context)
{
std::cout << "Cannot allocate avformat memory." << std::endl;
std::cin.get();exit(1);
}
context->oformat = outputFormat;
sprintf_s(context->filename, sizeof(context->filename), "%s", filename);
std::cout << "Is '" << context->filename << "' = '" << filename << "'" << std::endl;
videoStream = NULL;
outputFormat->audio_codec = AV_CODEC_ID_NONE;
videoStream = addVideoStream(context, outputFormat->video_codec);
/* still needed?
if(av_set_parameters(context, NULL) < 0)
{
std::cout << "Invalid output format parameters." << std::endl;
exit(0);
}*/
av_dump_format(context, 0, filename, 1);
if(videoStream)
openVideo(context, videoStream);
if(!outputFormat->flags & AVFMT_NOFILE)
{
if(avio_open(&context->pb, filename, AVIO_FLAG_READ_WRITE) < 0)
{
std::cout << "Could not open " << filename << std::endl;
std::cin.get();exit(1);
}
}
avformat_write_header(context, 0);
while(true)
{
if(videoStream)
videoPTS = (double) videoStream->pts.val * videoStream->time_base.num / videoStream->time_base.den;
else
videoPTS = 0.;
if((!videoStream || videoPTS >= STREAM_DURATION))
{
break;
}
write_video_frame(context, videoStream);
}
av_write_trailer(context);
if(videoStream)
closeVideo(context, videoStream);
for(int i = 0; i < context->nb_streams; i++)
{
av_freep(&context->streams[i]->codec);
av_freep(&context->streams[i]);
}
if(!(outputFormat->flags & AVFMT_NOFILE))
{
avio_close(context->pb);
}
av_free(context);
std::cin.get();
return 0;
}
Compile:
g++ -I ./FFmpeg/ video.cpp -L fflibs -lavcodec -lavformat
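As an aside, the example also calls into libswscale and libavutil, so those normally need to be linked as well. When the FFmpeg development packages are installed system-wide, a pkg-config based command (a sketch, not from the original post) avoids hard-coding the paths:
g++ video.cpp -o video $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil)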
The code comes with two errors:
video.cpp:249:84: error: ‘avcodec_encode_video’ was not declared in this scope
out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size, picture);
^
video.cpp: In function ‘int main(int, char**)’:
video.cpp:342:46: error: ‘AVStream {aka struct AVStream}’ has no member named ‘pts’
videoPTS = (double) videoStream->pts.val * videoStream->time_base.num / videoStream->time_base.den;
^
and a huge number of deprecation warnings.
video.cpp: In function ‘void closeVideo(AVFormatContext*, AVStream*)’:
video.cpp:60:23: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
avcodec_close(st->codec);
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:60:23: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
avcodec_close(st->codec);
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:60:23: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
avcodec_close(st->codec);
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp: In function ‘AVFrame* alloc_picture(AVPixelFormat, int, int)’:
video.cpp:80:12: warning: ‘int avpicture_get_size(AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
size = avpicture_get_size(pix_fmt, width, height);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:5228:5: note: declared here
int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
^
video.cpp:80:12: warning: ‘int avpicture_get_size(AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
size = avpicture_get_size(pix_fmt, width, height);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:5228:5: note: declared here
int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
^
video.cpp:80:53: warning: ‘int avpicture_get_size(AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
size = avpicture_get_size(pix_fmt, width, height);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:5228:5: note: declared here
int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
^
video.cpp:87:5: warning: ‘int avpicture_fill(AVPicture*, const uint8_t*, AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:5213:5: note: declared here
int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
^
video.cpp:87:5: warning: ‘int avpicture_fill(AVPicture*, const uint8_t*, AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:5213:5: note: declared here
int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
^
video.cpp:87:78: warning: ‘int avpicture_fill(AVPicture*, const uint8_t*, AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:5213:5: note: declared here
int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
^
video.cpp: In function ‘void openVideo(AVFormatContext*, AVStream*)’:
video.cpp:96:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
c = st->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:96:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
c = st->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:96:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
c = st->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp: In function ‘AVStream* addVideoStream(AVFormatContext*, AVCodecID)’:
video.cpp:151:21: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
codec = stream->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:151:21: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
codec = stream->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:151:21: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
codec = stream->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp: In function ‘void write_video_frame(AVFormatContext*, AVStream*)’:
video.cpp:202:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
c = st->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:202:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
c = st->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:202:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
c = st->codec;
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:256:20: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
if (c->coded_frame->pts != AV_NOPTS_VALUE)
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:256:20: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
if (c->coded_frame->pts != AV_NOPTS_VALUE)
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:256:20: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
if (c->coded_frame->pts != AV_NOPTS_VALUE)
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:257:42: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:257:42: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:257:42: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:258:19: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
if(c->coded_frame->key_frame)
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:258:19: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
if(c->coded_frame->key_frame)
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:258:19: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
if(c->coded_frame->key_frame)
^
In file included from video.cpp:8:0:
./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
attribute_deprecated AVFrame *coded_frame;
^
video.cpp:357:40: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
av_freep(&context->streams[i]->codec);
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:357:40: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
av_freep(&context->streams[i]->codec);
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:357:40: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
av_freep(&context->streams[i]->codec);
^
In file included from video.cpp:9:0:
./FFmpeg/libavformat/avformat.h:876:21: note: declared here
AVCodecContext *codec;
^
video.cpp:337:38: warning: ignoring return value of ‘int avformat_write_header(AVFormatContext*, AVDictionary**)’, declared with attribute warn_unused_result [-Wunused-result]
avformat_write_header(context, 0);
^
I have also defined a few macros to redefine those that have been omitted. In a modern ffmpeg API they must be replaced.
Could someone please help me fix the errors and deprecation warnings so that the code complies with the recent ffmpeg API?
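For orientation, here is a minimal sketch (not part of the original post) of how the same task looks with the post-3.1 API: avcodec_send_frame()/avcodec_receive_packet() replace the removed avcodec_encode_video()/avcodec_encode_video2() pair, AVStream::codecpar filled via avcodec_parameters_from_context() replaces the deprecated AVStream::codec, and a plain frame counter replaces the removed AVStream::pts field. It assumes FFmpeg 3.1 or newer built with libx264 and omits most error checking:
// Minimal modern-API sketch (assumption: FFmpeg >= 3.1 with libx264); error handling abbreviated.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

int main()
{
    const char *filename = "test.mp4";
    av_register_all();                                   // still needed before FFmpeg 4.0

    AVFormatContext *oc = nullptr;
    avformat_alloc_output_context2(&oc, nullptr, nullptr, filename);

    AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVStream *st = avformat_new_stream(oc, nullptr);
    AVCodecContext *c = avcodec_alloc_context3(codec);   // own context instead of st->codec
    c->width = 800;
    c->height = 480;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->time_base = {1, 24};                              // 24 fps
    c->bit_rate = 400000;
    c->gop_size = 12;
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    avcodec_open2(c, codec, nullptr);
    avcodec_parameters_from_context(st->codecpar, c);    // copy settings to the stream
    st->time_base = c->time_base;

    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
    avformat_write_header(oc, nullptr);

    AVFrame *frame = av_frame_alloc();
    frame->format = c->pix_fmt;
    frame->width = c->width;
    frame->height = c->height;
    av_frame_get_buffer(frame, 32);

    // Encode 5 seconds of the same dummy YUV pattern used in the question.
    for (int i = 0; i < 24 * 5; i++) {
        av_frame_make_writable(frame);
        for (int y = 0; y < c->height; y++)
            for (int x = 0; x < c->width; x++)
                frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
        for (int y = 0; y < c->height / 2; y++)
            for (int x = 0; x < c->width / 2; x++) {
                frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
            }
        frame->pts = i;                                  // counted in c->time_base units

        avcodec_send_frame(c, frame);                    // replaces avcodec_encode_video()
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = nullptr;
        pkt.size = 0;
        while (avcodec_receive_packet(c, &pkt) == 0) {
            av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
            pkt.stream_index = st->index;
            av_interleaved_write_frame(oc, &pkt);
            av_packet_unref(&pkt);
        }
    }

    avcodec_send_frame(c, nullptr);                      // flush delayed frames
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = nullptr;
    pkt.size = 0;
    while (avcodec_receive_packet(c, &pkt) == 0) {
        av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
        pkt.stream_index = st->index;
        av_interleaved_write_frame(oc, &pkt);
        av_packet_unref(&pkt);
    }

    av_write_trailer(oc);
    av_frame_free(&frame);
    avcodec_free_context(&c);
    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}
It compiles with the same kind of command as above, with -lavutil added for the frame helpers.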