
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (96)
-
Participate in its documentation
10 April 2011
Documentation is one of the most important and most demanding tasks in the development of a technical tool.
Any outside contribution on this subject is essential: critiquing what already exists; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
To do so, you can register on (...)
-
The authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (4312)
-
FFmpeg, how to skip late input?
14 November 2017, by user3343357
I'm running ffmpeg to display an incoming stream on a DeckLink BlackMagic card with the following command line:

ffmpeg -y -f ourFmt -probesize 32 -i - -f decklink -preset ultrafast
 -pix_fmt uyvy422 -s 1920x1080 -r 30 -af volume=0.1 -max_delay 10000
 "DeckLink Mini Monitor"

Basically I get the video over the internet via UDP and stream it to ffmpeg's stdin. Both the audio and video streams have pts and dts and are fully in sync; if the connection is good there are no problems.
However, if there are issues with the connection, I start getting errors: sometimes the video delay grows significantly and the audio stops working.
The errors I get are:

ffmpeg: [decklink @ 0x26cc600] There are not enough buffered video frames. Video may misbehave!
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!
ffmpeg:     Last message repeated 4 times
ffmpeg: [decklink @ 0x26cc600] There are not enough buffered video frames. Video may misbehave!
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!
ffmpeg:     Last message repeated 3 times
ffmpeg: frame= 5204 fps= 30 q=-0.0 size=N/A time=00:02:53.76 bitrate=N/A dup=385 drop=5 speed=0.993x
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!
ffmpeg:     Last message repeated 18 times
ffmpeg: [decklink @ 0x26cc600] There are not enough buffered video frames. Video may misbehave!
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!

The problem is that when the connection returns to normal, the video keeps misbehaving until I restart the stream. What I want is for ffmpeg to skip to the content of the last second and play synchronized video from there, dropping all the late data in between. Is that possible?
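A possible workaround (not from the original post) is to keep the backlog out of ffmpeg entirely: feed its stdin from a bounded buffer so that, when the link stalls and then recovers, stale packets are discarded on the feeder side instead of queuing up in front of the decklink output. A rough Python sketch under those assumptions; the listening port is hypothetical, and dropping packets mid-stream can still glitch the demuxer until the next keyframe:

import socket
import subprocess
import time
from collections import deque
from threading import Thread

# Mirrors the command from the question; "ourFmt" is the poster's custom input format.
FFMPEG_CMD = [
    "ffmpeg", "-y", "-f", "ourFmt", "-probesize", "32", "-i", "-",
    "-f", "decklink", "-preset", "ultrafast", "-pix_fmt", "uyvy422",
    "-s", "1920x1080", "-r", "30", "-af", "volume=0.1", "-max_delay", "10000",
    "DeckLink Mini Monitor",
]

LISTEN_ADDR = ("0.0.0.0", 5000)   # hypothetical UDP port the stream arrives on
MAX_BACKLOG = 256                 # packets kept while the writer is behind (tuning value)


def main():
    backlog = deque(maxlen=MAX_BACKLOG)   # oldest packets are silently dropped when full
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)

    # bufsize=0 keeps stdin unbuffered so data is not held back on our side
    proc = subprocess.Popen(FFMPEG_CMD, stdin=subprocess.PIPE, bufsize=0)

    def receive():
        # Receiving never blocks on ffmpeg: packets go into the bounded deque,
        # so a stalled pipe trims the backlog instead of letting it grow.
        while True:
            data, _ = sock.recvfrom(65536)
            backlog.append(data)

    Thread(target=receive, daemon=True).start()

    # Writer side: forward whatever is currently buffered to ffmpeg's stdin.
    while proc.poll() is None:
        if backlog:
            proc.stdin.write(backlog.popleft())
        else:
            time.sleep(0.001)


if __name__ == "__main__":
    main()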
-
Fully GPU-accelerated (decoding, deinterlacing, scaling, encoding) HLS variable stream with ffmpeg
6 July 2021, by Milan Čížek
I'm trying to create a multi-bitrate (MBR) HLS live stream using ffmpeg that is fully accelerated at the GPU level, meaning accelerated decoding, deinterlacing, scaling and encoding. Here is my broken example ...


ffmpeg -loglevel debug -hwaccel cuvid -c:v h264_cuvid -hwaccel_output_format cuda -vsync 0 -i "udp://@239.250.4.152:1234?fifo_size=1000000&overrun_nonfatal=1" \
-filter_complex "[0:v]yadif_cuda=0:-1:0,split=3[v1][v2][v3],[v1]copy[v1out],[v2]scale_npp=1280:720[v2out],[v3]scale_npp=720:405[v3out]" \
-map [v1out] -c:v:0 hevc_nvenc -b:v:0 4000k -g 48 \
-map [v2out] -c:v:1 hevc_nvenc -b:v:0 3000k -g 48 \
-map [v3out] -c:v:2 hevc_nvenc -b:v:0 2000k -g 48 \
-map a:0 -c:a:0 aac -b:a:0 128k -ac 2 \
-map a:0 -c:a:1 aac -b:a:1 96k -ac 2 \
-map a:0 -c:a:2 aac -b:a:2 64k -ac 2 \
-f hls \
-hls_playlist_type event \
-hls_segment_type mpegts \
-hls_time $seglen \
-hls_list_size $numsegs \
-hls_flags delete_segments+independent_segments \
-hls_segment_filename "$dst/stream_%v/$segments" \
-hls_base_url "$url" \
-master_pl_name "$dst/$index" \
-var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" \
"$dst/$index"



Note: my graphics card can handle more than 2 concurrent encoding sessions.
I'm getting the classic error "Impossible to convert between the formats supported by the filter 'Parsed_split_1' and the filter 'auto_scaler_0'".


Is my goal realistic? What is the proper way to use the GPU as efficiently as possible in this scenario? Thanks for the help.


Stream mapping:
  Stream #0:3 (h264_cuvid) -> yadif_cuda (graph 0)
  copy (graph 0) -> Stream #0:0 (h264_nvenc)
  scale_npp (graph 0) -> Stream #0:1 (h264_nvenc)
  scale_npp (graph 0) -> Stream #0:2 (h264_nvenc)
  Stream #0:4 -> #0:3 (ac3 (native) -> aac (native))
  Stream #0:4 -> #0:4 (ac3 (native) -> aac (native))
  Stream #0:4 -> #0:5 (ac3 (native) -> aac (native))
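A side note on the error (a suggested variant, not part of the original post): with -hwaccel_output_format cuda the decoded frames stay in GPU memory, and copy is a software-only filter, so ffmpeg tries to auto-insert a format converter between split and copy and fails. Dropping copy and mapping the first split pad directly keeps every filter in the chain (yadif_cuda, split, scale_npp) CUDA-aware. Below is a sketch that assembles such a command from Python; the paths, segment naming, per-variant bitrates and playlist names are placeholders:

import subprocess

# Placeholders standing in for the $dst/$seglen/$numsegs/$url variables in the question.
dst = "/var/www/hls"
url = "https://example.com/hls/"
seglen, numsegs = "6", "6"

# All-GPU filter graph: the full-resolution branch is taken straight from the
# split pad instead of going through the software-only `copy` filter.
filter_complex = (
    "[0:v]yadif_cuda=0:-1:0,split=3[v1out][v2][v3];"
    "[v2]scale_npp=1280:720[v2out];"
    "[v3]scale_npp=720:405[v3out]"
)

cmd = [
    "ffmpeg", "-loglevel", "info",
    "-hwaccel", "cuvid", "-c:v", "h264_cuvid", "-hwaccel_output_format", "cuda", "-vsync", "0",
    "-i", "udp://@239.250.4.152:1234?fifo_size=1000000&overrun_nonfatal=1",
    "-filter_complex", filter_complex,
    "-map", "[v1out]", "-c:v:0", "hevc_nvenc", "-b:v:0", "4000k", "-g", "48",
    "-map", "[v2out]", "-c:v:1", "hevc_nvenc", "-b:v:1", "3000k", "-g", "48",
    "-map", "[v3out]", "-c:v:2", "hevc_nvenc", "-b:v:2", "2000k", "-g", "48",
    "-map", "a:0", "-c:a:0", "aac", "-b:a:0", "128k", "-ac", "2",
    "-map", "a:0", "-c:a:1", "aac", "-b:a:1", "96k", "-ac", "2",
    "-map", "a:0", "-c:a:2", "aac", "-b:a:2", "64k", "-ac", "2",
    "-f", "hls",
    "-hls_segment_type", "mpegts",
    "-hls_time", seglen,
    "-hls_list_size", numsegs,
    "-hls_flags", "delete_segments+independent_segments",
    "-hls_segment_filename", f"{dst}/stream_%v_%06d.ts",
    "-hls_base_url", url,
    "-master_pl_name", "master.m3u8",
    "-var_stream_map", "v:0,a:0 v:1,a:1 v:2,a:2",
    f"{dst}/stream_%v.m3u8",
]

subprocess.run(cmd, check=True)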



-
.mp4 file is not playing in a Django template, Firefox or Chrome
9 February 2021, by Himanshu sharma
I can save a live RTSP video stream to .mp4, but when I play the saved .mp4 in a Django template, in Firefox or Chrome, or in VLC on Ubuntu, the video does not play.
I think the .mp4 has a compatibility issue. I also want to display and play the saved .mp4 files in Django.


I have two IP cameras that provide a live RTSP video stream.


self.input_stream---> rtsp://admin:Admin123@192.168.1.208/user=admin_password=Admin123_channel=0channel_number_stream=0.sdp

self.input_stream---> rtsp://Admin:@192.168.1.209/user=Admin_password=_channel=0channel_number_stream=0.sdp



Python 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0] on linux


Django==3.1.2


I am running this code on different Ubuntu PCs


Ubuntu 18.04 and 20.04


opencv-contrib-python==4.4.0.46


opencv-python==4.4.0.46


ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)


I have tried different fourcc codes to save the .mp4, but the fourcc below works perfectly on Ubuntu.


fourcc = cv2.VideoWriter_fourcc(*'mp4v')


import threading
import time

import cv2
import imutils
from imutils.video import FPS


class ffmpegStratStreaming(threading.Thread):
    def __init__(self, input_stream=None, output_stream=None, camera=None, *args, **kwargs):
        self.input_stream = input_stream
        self.output_stream = output_stream
        self.camera = camera
        super().__init__(*args, **kwargs)

    def run(self):
        try:
            vs = cv2.VideoCapture(self.input_stream)
            fps_rate = int(vs.get(cv2.CAP_PROP_FPS))
            # print('fps rate-->', fps_rate, 'camera id-->', str(self.camera.id))
            # vs.set(cv2.CAP_PROP_POS_FRAMES, 50)  # set the frame number to be obtained

            # initialize the video writer (we'll instantiate it later if need be)
            writer = None

            # initialize the frame dimensions (we'll set them as soon as we read
            # the first frame from the video)
            W = None
            H = None

            # start the frames-per-second throughput estimator
            fps = FPS().start()
            totalFrames = 0

            # saving frames to a new video file every interval
            video_file_count = 0
            start_time = time.time()

            while True:
                try:
                    # grab the next frame and handle if we are reading from either
                    # VideoCapture or VideoStream
                    frame_init = vs.read()
                    frame = frame_init[1] if self.input_stream else frame_init

                    # if the frame can't be read correctly, ret is False: reconnect
                    while not frame_init[0]:
                        print("Can't receive frame. Retrying ...")
                        vs.release()
                        vs = cv2.VideoCapture(self.input_stream)
                        frame_init = vs.read()
                        frame = frame_init[1] if self.input_stream else frame_init

                    # if we are viewing a video and we did not grab a frame then we
                    # have reached the end of the video
                    if self.input_stream is not None and frame is None:
                        break

                    # resize the frame to have a maximum width of 500 pixels (the
                    # less data we have, the faster we can process it)
                    frame = imutils.resize(frame, width=500)

                    # <---------------- start of writing a video to disk ---------------->
                    minutes = 1
                    seconds = 60
                    seconds_per_video_file = int(minutes) * int(seconds)

                    # if we are supposed to be writing a video to disk, initialize the writer
                    if time.time() - start_time >= seconds_per_video_file or video_file_count == 0:
                        # H = height, W = width, C = channels
                        H, W, C = frame.shape
                        video_file_count += 1
                        start_time = time.time()
                        output_save_directory = self.output_stream + str(video_file_count) + '.mp4'

                        # other fourcc codes that were tried:
                        # cv2.VideoWriter_fourcc(*'MJPG'), cv2.VideoWriter_fourcc(*'XVID'),
                        # cv2.VideoWriter_fourcc(*'X264'), cv2.VideoWriter_fourcc(*'MP4V'),
                        # cv2.VideoWriter_fourcc(*'FMP4'), cv2.VideoWriter_fourcc(*'avc1'),
                        # cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 0x00000021
                        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
                        print(fourcc, type(fourcc), 'ffffffff')
                        # a = int(vs.get(cv2.CAP_PROP_FOURCC))
                        # print(a, type(a), ' aaaaaaaa')

                        # writer = skvideo.io.FFmpegWriter(output_save_directory, outputdict={
                        #     '-vcodec': 'libx264', '-b': '300000000'})
                        # writer = skvideo.io.FFmpegWriter(self.output_stream, outputdict={
                        #     '-r': '120', '-c:v': 'libx264', '-crf': '0',
                        #     '-preset': 'ultrafast', '-pix_fmt': 'yuv444p'})

                        # release the previous writer so its file gets finalized
                        if writer is not None:
                            writer.release()

                        # cv2.VideoWriter takes five parameters:
                        # output filename, fourcc, fps, frame size (width, height), isColor
                        writer = cv2.VideoWriter(output_save_directory, fourcc, 20.0, (int(W), int(H)), True)

                    # check to see if we should write the frame to disk
                    if writer is not None:
                        try:
                            writer.write(frame)
                        except Exception as e:
                            print('Error in writing video output---> ', e)
                    # <----------------- end of writing a video to disk ----------------->

                    # show the output frame
                    # cv2.imshow("Frame", frame)
                    # key = cv2.waitKey(1) & 0xFF
                    # if the `q` key was pressed, break from the loop
                    # if key == ord("q"):
                    #     break

                    # increment the total number of frames processed thus far and
                    # then update the FPS counter
                    totalFrames += 1
                    fps.update()
                except Exception as e:
                    print('Error in main while loop--> ', e)

            # stop the timer and display FPS information
            fps.stop()

            # check to see if we need to release the video writer pointer
            # if writer is not None:
            #     writer.release()

            # if we are not using a video file, stop the camera video stream
            # if not self.input_stream:
            #     vs.stop()
            # otherwise, release the video file pointer
            # else:
            #     vs.release()

            # close any open windows
            # cv2.destroyAllWindows()
        except Exception as e:
            print(e, '333333333333333333333333333')
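As an aside on the playback problem itself (a suggestion, not part of the original post): the 'mp4v' fourcc produces MPEG-4 Part 2 video, which Firefox and Chrome generally do not play; browsers expect H.264 video in yuv420p inside the MP4 container. Since stock OpenCV builds often cannot write H.264 directly, one common workaround is to re-encode the finished file with ffmpeg. A minimal sketch, assuming ffmpeg is on the PATH; the paths are hypothetical:

import subprocess

def make_browser_playable(src_path, dst_path):
    """Re-encode an OpenCV-written .mp4 to H.264/yuv420p so browsers can play it."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src_path,
            "-c:v", "libx264",          # H.264 video, widely supported by browsers
            "-pix_fmt", "yuv420p",      # 4:2:0 chroma, expected by most players
            "-movflags", "+faststart",  # move the moov atom up front for progressive playback
            "-an",                      # the OpenCV writer produced no audio track
            dst_path,
        ],
        check=True,
    )

# Example (hypothetical paths):
# make_browser_playable("/path/to/video-saved/1.mp4", "/path/to/video-saved/1_h264.mp4")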



My Django template code


{% load static %}
{% block main %}
<div class="row">
  {% if folders|length == 0 %}
  <div class="col-md-12 text-center">
    <h6 class="card-title">There are no files in this directory.</h6>
  </div>
  {% else %}
  {% for folder in folders %}
  <div class="col-md-3 text-center">
    <div class="card-block">
      <video class="video-js vjs-fluid vjs-default-skin" controls preload="auto" muted data-setup='{}'>
        <source src="{% get_media_prefix %}camera-feed/video-saved/{{pk}}/{{parent}}/{{folder}}" type="video/mp4">
      </video>
      <h6 class="card-title">{{folder}}</h6>
    </div>
  </div>
  {% endfor %}
  {% endif %}
</div>
{% endblock %}



The function in views.py that lists the saved .mp4 videos for the Django template. Videos are saved to a local folder.


import os
from pathlib import Path
from django.conf import settings
from django.shortcuts import render

def CameraVideos(request, pk, folder):
    dirpath = settings.MEDIA_ROOT + '/camera-feed/video-saved/' + str(pk) + '/' + folder
    folders = sorted(Path(dirpath).iterdir(), key=os.path.getmtime)
    data = []
    for file in folders:
        if not file.name.startswith('.'):
            data.append(file.name)
    return render(request, "home/camera_video.html", {'folders': data, 'pk': pk, 'parent': folder})
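One more detail worth checking (an assumption on my part, since the project's urls.py is not shown): files saved under MEDIA_ROOT are only reachable from the template if Django actually serves them. In development this is usually wired up in the project urls.py along these lines:

# project urls.py (sketch): serve files saved under MEDIA_ROOT during development
from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import path

urlpatterns = [
    path('admin/', admin.site.urls),
    # ... the app's own routes go here ...
]

if settings.DEBUG:
    # map MEDIA_URL to MEDIA_ROOT so the <source src="{% get_media_prefix %}..."> URLs resolve
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)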