
Media (1)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (40)
-
Other interesting software
13 April 2011, by
We don't claim to be the only ones doing what we do, and certainly don't claim to be the best at it. We simply try to do it well and to keep improving.
The following list gathers software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less tries to match.
We don't know these projects well and haven't tried them, but you can take a look.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First, you need to create a SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
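The teaser above only describes those two extra actions in general terms, and SPIPMotion's own implementation is not shown here. As a rough illustration only, a minimal Python sketch of the same two steps (probing the audio/video streams and extracting a thumbnail frame) using the standard ffprobe and ffmpeg command-line tools might look like this; the function names, file names and flags are assumptions, not taken from SPIPMotion:

import json
import subprocess

def probe_streams(source_path):
    """Return the technical information of the file's audio and video streams."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-show_format",
         "-of", "json", source_path],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)["streams"]

def make_thumbnail(source_path, thumb_path="vignette.jpg", at_seconds=5):
    """Extract a single frame to use as the document's thumbnail."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(at_seconds), "-i", source_path,
         "-frames:v", "1", thumb_path],
        check=True, capture_output=True,
    )
    return thumb_path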
On other sites (7170)
-
Revision 32596: lowercase and done for today
1 November 2009, by fil@… — Log: lowercase and done for today
-
What am I doing wrong? Tweepy with ffmpeg
27 August 2020, by pigeonburger
I'm trying to get this code to pull the media from any tweet that mentions my Twitter handle, convert it using ffmpeg via the subprocess module, then send the converted media back to the person as a reply. Is this all correct?

I am also getting an error at
tweet_media = clean_data['entities']['media']['media_url']
and I don't understand what I'm doing wrong there:
Exception has occurred: TypeError
list indices must be integers or slices, not str
line 32, in on_data
tweet_media = clean_data['entities']['media']['media_url']

Also, is there a better way to use ffmpeg with Python that I am not aware of?


Here is the code I wrote that I'm trying to use:


import tweepy
from tweepy import Stream
from tweepy.streaming import StreamListener
from datetime import datetime
import time
import subprocess

stdout = subprocess.PIPE
def runcmd(cmd):
    x = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    return x.communicate(stdout)

print(" TWITTER BOT")
time.sleep(1.5)
print(" By PigeonBurger, updated 26 August 2020 \n")

import json
import random

class StdOutListener(StreamListener):
    def on_data(self, data):
        clean_data = json.loads(data)
        tweetId = clean_data['id']
        tweet_name = clean_data['user']['screen_name']
        tweet_media = clean_data['entities']['media']['media_url']
        tweet_photo = runcmd('ffmpeg -i tweet_media output.jpg')
        print(clean_data)
        tweet = 'Here ya go'
        now = datetime.now()
        dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
        print(' Reply sent to @'+tweet_name, 'on', dt_string, '\n' ' Message:', tweet, '\n')
        respondToTweet(tweet_photo, tweet, tweetId)

def setUpAuth():
    auth = tweepy.OAuthHandler("consumer_key", "consumer_secret")
    auth.set_access_token("access_token", "access_token_secret")
    api = tweepy.API(auth)
    return api, auth

def followStream():
    api, auth = setUpAuth()
    listener = StdOutListener()
    stream = Stream(auth, listener)
    stream.filter(track=["@YOUR_TWITTER_HANDLE"], is_async=True)

def respondToTweet(tweet_photo, tweet, tweetId):
    api, auth = setUpAuth()
    api.update_with_media(tweet_photo, tweet, in_reply_to_status_id=tweetId, auto_populate_reply_metadata=True, stall_warnings=True)

if __name__ == "__main__":
    followStream()
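For what it's worth, a likely cause of that TypeError is that clean_data['entities']['media'] is a list of media objects, so it needs a numeric index before ['media_url']; note also that the string 'ffmpeg -i tweet_media output.jpg' contains the variable name literally rather than its value. A minimal, untested sketch of how those two steps might look instead (the helper name, download step and file names are assumptions, not part of the original code):

import subprocess
import urllib.request

def fetch_and_convert(clean_data, out_path="output.jpg"):
    # 'media' is a list of media entities, so take the first one
    tweet_media = clean_data['entities']['media'][0]['media_url']

    # Download the media locally before handing it to ffmpeg
    local_path, _ = urllib.request.urlretrieve(tweet_media, "input_media")

    # Pass the command as an argument list so the real path is used,
    # not the literal string "tweet_media"
    subprocess.run(["ffmpeg", "-y", "-i", local_path, out_path],
                   check=True, capture_output=True)
    return out_path

As for a "better way": on POSIX systems a whole command line passed to subprocess.Popen as a single string without shell=True is not split into arguments, which is one reason subprocess.run with an argument list is usually preferred; wrapper libraries such as ffmpeg-python also exist, but plain subprocess is enough here.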



-
How to get frames from HDR video in scRGB color space?
5 March 2018, by Виталий Синявский
I want to create a simple video player that will show HDR video on an HDR TV. For example, this "LG Chess HDR" video. It is encoded with HEVC, its bit depth is 10 bits, the pixel format is YUV420P10LE, and it has metadata about the BT.2020 color space and the PQ transfer function.
In this NVIDIA article I found the following:
The display driver takes the scRGB back buffer, and converts it to the
standard expected by the display presently connected. In general, this
means converting the color space from sRGB primaries to BT. 2020
primaries, scaling to an appropriate level, and encoding with a
mechanism like PQ. Also, possibly performing conversions like RGB to
YCC if that display connection requires it.

This means that my player should render pixels in the scRGB color space (linear encoding, sRGB primaries, full range from -0.5 to just under +7.5). So I need to get frames from the source video in this color space somehow, preferably in the FP16 pixel format (half float, 16 bits per color channel). I came up with the following simple pipeline for rendering HDR video:
source HDR video in BT2020 color space with applied PQ -> [some video library] ->
-> video frames with colors in scRGB color space -> [my program] ->
-> rendered video on HDR TV with applied conversions by display driver

I'm trying to use FFmpeg as this library and do not understand how to get frames from the source HDR video in the scRGB color space.
I currently use FFmpeg's sws_scale to get frames and I know about the filters API, but I could not find any information on how to transparently get frames in scRGB with this functionality, without parsing the metadata of every source video and creating custom video filters for each of them.
Please tell me what I can do to get frames in the scRGB color space using FFmpeg. Can anyone suggest other libraries that could do this?
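FFmpeg has no built-in notion of an "scRGB frame", but one possible approach, sketched very roughly below, is to let the zscale filter (built against libzimg) undo the PQ transfer and convert the primaries to BT.709 (the sRGB/scRGB primaries), output linear-light float RGB, and then remap that buffer to FP16 scRGB in the application. The resolution, file name, option values and frame-reading loop are assumptions about one workable configuration and should be checked against the local FFmpeg build; the same filter string could equally be used through the libavfilter API instead of the command line.

import subprocess
import numpy as np

WIDTH, HEIGHT = 3840, 2160  # assumed resolution of the source clip

# Undo PQ and convert BT.2020 primaries to BT.709, outputting planar
# 32-bit float RGB in linear light; scaling to scRGB's reference level
# (1.0 = 80 nits) is left to the application.
FILTER = ("zscale=transfer=linear:primaries=bt709:matrix=bt709,"
          "format=gbrpf32le")

proc = subprocess.Popen(
    ["ffmpeg", "-i", "lg_chess_hdr.mkv", "-vf", FILTER, "-f", "rawvideo", "-"],
    stdout=subprocess.PIPE,
)

frame_bytes = WIDTH * HEIGHT * 3 * 4  # three float32 planes
while True:
    raw = proc.stdout.read(frame_bytes)
    if len(raw) < frame_bytes:
        break
    planes = np.frombuffer(raw, dtype=np.float32).reshape(3, HEIGHT, WIDTH)
    # gbrpf32le stores planes in G, B, R order; reorder to RGB
    rgb_linear = np.stack([planes[2], planes[0], planes[1]], axis=-1)
    scrgb_fp16 = rgb_linear.astype(np.float16)  # hand this to the HDR swap chain

Whether zscale picks up the source's BT.2020/PQ characteristics automatically depends on the stream being tagged correctly, so for untagged files the input-side options (transferin, primariesin, matrixin) may need to be set explicitly.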