Probabilistic losses - Keras


Probabilistic losses

BinaryCrossentropy class

tf.keras.losses.BinaryCrossentropy(
    from_logits=False,
    label_smoothing=0.0,
    axis=-1,
    reduction="auto",
    name="binary_crossentropy",
)

Computes the cross-entropy loss between true labels and predicted labels.

Use this cross-entropy loss for binary (0 or 1) classification applications.
The loss function requires the following inputs:

y_true (true label): This is either 0 or 1.
y_pred (predicted value): This is the model's prediction, i.e., a single
floating-point value which either represents a logit (i.e., a value in
[-inf, inf] when from_logits=True) or a probability (i.e., a value in
[0., 1.] when from_logits=False).

Recommended Usage: (set from_logits=True)

With tf.keras API:

model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    ....
)

As a standalone function:

>>> # Example 1: (batch_size = 1, number of samples = 4)
>>> y_true = [0, 1, 0, 0]
>>> y_pred = [-18.6, 0.51, 2.94, -12.8]
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
>>> bce(y_true, y_pred).numpy()
0.865

>>> # Example 2: (batch_size = 2, number of samples = 4)
>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[-18.6, 0.51], [2.94, -12.8]]
>>> # Using default 'auto'/'sum_over_batch_size' reduction type.
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
>>> bce(y_true, y_pred).numpy()
0.865
>>> # Using 'sample_weight' attribute
>>> bce(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.243
>>> # Using 'sum' reduction type.
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,
...     reduction=tf.keras.losses.Reduction.SUM)
>>> bce(y_true, y_pred).numpy()
1.730
>>> # Using 'none' reduction type.
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,
...     reduction=tf.keras.losses.Reduction.NONE)
>>> bce(y_true, y_pred).numpy()
array([0.235, 1.496], dtype=float32)

Default Usage: (set from_logits=False)

>>> # Make the following updates to the above "Recommended Usage" section
>>> # 1. Set `from_logits=False`
>>> tf.keras.losses.BinaryCrossentropy()  # OR ...('from_logits=False')
>>> # 2. Update `y_pred` to use probabilities instead of logits
>>> y_pred = [0.6, 0.3, 0.2, 0.8]  # OR [[0.6, 0.3], [0.2, 0.8]]

CategoricalCrossentropy class

tf.keras.losses.CategoricalCrossentropy(
    from_logits=False,
    label_smoothing=0.0,
    axis=-1,
    reduction="auto",
    name="categorical_crossentropy",
)

Computes the crossentropy loss between the labels and predictions.

Use this crossentropy loss function when there are two or more label
classes. We expect labels to be provided in a one_hot representation. If
you want to provide labels as integers, please use
SparseCategoricalCrossentropy loss. There should be # classes floating
point values per feature.

In the snippet below, there are # classes floating point values per
example. The shape of both y_pred and y_true is
[batch_size, num_classes].

Standalone usage:

>>> y_true = [[0, 1, 0], [0, 0, 1]]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> cce = tf.keras.losses.CategoricalCrossentropy()
>>> cce(y_true, y_pred).numpy()
1.177
>>> # Calling with 'sample_weight'.
>>> cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.814
>>> # Using 'sum' reduction type.
>>> cce = tf.keras.losses.CategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> cce(y_true, y_pred).numpy()
2.354
>>> # Using 'none' reduction type.
>>> cce = tf.keras.losses.CategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> cce(y_true, y_pred).numpy()
array([0.0513, 2.303], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.CategoricalCrossentropy())

SparseCategoricalCrossentropy class

tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    ignore_class=None,
    reduction="auto",
    name="sparse_categorical_crossentropy",
)

Computes the crossentropy loss between the labels and predictions.

Use this crossentropy loss function when there are two or more label
classes. We expect labels to be provided as integers. If you want to
provide labels using a one-hot representation, please use
CategoricalCrossentropy loss. There should be # classes floating point
values per feature for y_pred and a single floating point value per
feature for y_true.

In the snippet below, there is a single floating point value per example for
y_true and # classes floating point values per example for y_pred.
The shape of y_true is [batch_size] and the shape of y_pred is
[batch_size, num_classes].

Standalone usage:

>>> y_true = [1, 2]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> scce = tf.keras.losses.SparseCategoricalCrossentropy()
>>> scce(y_true, y_pred).numpy()
1.177
>>> # Calling with 'sample_weight'.
>>> scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.814
>>> # Using 'sum' reduction type.
>>> scce = tf.keras.losses.SparseCategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> scce(y_true, y_pred).numpy()
2.354
>>> # Using 'none' reduction type.
>>> scce = tf.keras.losses.SparseCategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> scce(y_true, y_pred).numpy()
array([0.0513, 2.303], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.SparseCategoricalCrossentropy())

Poisson class

tf.keras.losses.Poisson(reduction="auto", name="poisson")

Computes the Poisson loss between y_true and y_pred.
loss = y_pred - y_true * log(y_pred)

Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [0., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> p = tf.keras.losses.Poisson()
>>> p(y_true, y_pred).numpy()
0.5
>>> # Calling with 'sample_weight'.
>>> p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.4
>>> # Using 'sum' reduction type.
>>> p = tf.keras.losses.Poisson(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> p(y_true, y_pred).numpy()
0.999
>>> # Using 'none' reduction type.
>>> p = tf.keras.losses.Poisson(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> p(y_true, y_pred).numpy()
array([0.999, 0.], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson())

binary_crossentropy function

tf.keras.losses.binary_crossentropy(
    y_true, y_pred, from_logits=False, label_smoothing=0.0, axis=-1
)

Computes the binary crossentropy loss.

Standalone usage:

>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss.numpy()
array([0.916, 0.714], dtype=float32)

Arguments

y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
from_logits: Whether y_pred is expected to be a logits tensor. By
default, we assume that y_pred encodes a probability distribution.
label_smoothing: Float in [0, 1]. If > 0 then smooth the labels by
squeezing them towards 0.5. That is, using 1. - 0.5 * label_smoothing
for the target class and 0.5 * label_smoothing for the non-target
class.
axis: The axis along which the mean is computed. Defaults to -1.

Returns

Binary crossentropy loss value. shape = [batch_size, d0, .. dN-1].

categorical_crossentropy function

tf.keras.losses.categorical_crossentropy(
    y_true, y_pred, from_logits=False, label_smoothing=0.0, axis=-1
)

Computes the categorical crossentropy loss.
Standalone usage:

>>> y_true = [[0, 1, 0], [0, 0, 1]]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss.numpy()
array([0.0513, 2.303], dtype=float32)

Arguments

y_true: Tensor of one-hot true targets.
y_pred: Tensor of predicted targets.
from_logits: Whether y_pred is expected to be a logits tensor. By
default, we assume that y_pred encodes a probability distribution.
label_smoothing: Float in [0, 1]. If > 0 then smooth the labels. For
example, if 0.1, use 0.1 / num_classes for non-target labels
and 0.9 + 0.1 / num_classes for target labels.
axis: Defaults to -1. The dimension along which the entropy is
computed.

Returns

Categorical crossentropy loss value.

sparse_categorical_crossentropy function

tf.keras.losses.sparse_categorical_crossentropy(
    y_true, y_pred, from_logits=False, axis=-1, ignore_class=None
)

Computes the sparse categorical crossentropy loss.

Standalone usage:

>>> y_true = [1, 2]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss.numpy()
array([0.0513, 2.303], dtype=float32)

>>> y_true = [[[0, 2],
...            [-1, -1]],
...           [[0, 2],
...            [-1, -1]]]
>>> y_pred = [[[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
...            [[0.2, 0.5, 0.3], [0.0, 1.0, 0.0]]],
...           [[[1.0, 0.0, 0.0], [0.0, 0.5, 0.5]],
...            [[0.2, 0.5, 0.3], [0.0, 1.0, 0.0]]]]
>>> loss = tf.keras.losses.sparse_categorical_crossentropy(
...     y_true, y_pred, ignore_class=-1)
>>> loss.numpy()
array([[[2.3841855e-07, 2.3841855e-07],
        [0.0000000e+00, 0.0000000e+00]],
       [[2.3841855e-07, 6.9314730e-01],
        [0.0000000e+00, 0.0000000e+00]]], dtype=float32)

Arguments

y_true: Ground truth values.
y_pred: The predicted values.
from_logits: Whether y_pred is expected to be a logits tensor. By
default, we assume that y_pred encodes a probability distribution.
axis: Defaults to -1. The dimension along which the entropy is
computed.
ignore_class: Optional integer. The ID of a class to be ignored during
loss computation. This is useful, for example, in segmentation
problems featuring a "void" class (commonly -1 or 255) in segmentation
maps. By default (ignore_class=None), all classes are considered.

Returns

Sparse categorical crossentropy loss value.

poisson function

tf.keras.losses.poisson(y_true, y_pred)

Computes the Poisson loss between y_true and y_pred.

The Poisson loss is the mean of the elements of the Tensor
y_pred - y_true * log(y_pred).

Standalone usage:

>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.poisson(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> y_pred = y_pred + 1e-7
>>> assert np.allclose(
...     loss.numpy(), np.mean(y_pred - y_true * np.log(y_pred), axis=-1),
...     atol=1e-5)

Arguments

y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns

Poisson loss value. shape = [batch_size, d0, .. dN-1].

Raises

InvalidArgumentError: If y_true and y_pred have incompatible shapes.

KLDivergence class

tf.keras.losses.KLDivergence(reduction="auto", name="kl_divergence")

Computes Kullback-Leibler divergence loss between y_true and y_pred.

loss = y_true * log(y_true / y_pred)

See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

Standalone usage:

>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> kl = tf.keras.losses.KLDivergence()
>>> kl(y_true, y_pred).numpy()
0.458
>>> # Calling with 'sample_weight'.
>>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.366
>>> # Using 'sum' reduction type.
>>> kl = tf.keras.losses.KLDivergence(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> kl(y_true, y_pred).numpy()
0.916
>>> # Using 'none' reduction type.
>>> kl = tf.keras.losses.KLDivergence(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> kl(y_true, y_pred).numpy()
array([0.916, -3.08e-06], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence())

kl_divergence function

tf.keras.losses.kl_divergence(y_true, y_pred)

Computes Kullback-Leibler divergence loss between y_true and y_pred.

loss = y_true * log(y_true / y_pred)

See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

Standalone usage:

>>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)
>>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)
>>> assert np.array_equal(
...     loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))

Arguments

y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.

Returns

A Tensor with loss.

Raises

TypeError: If y_true cannot be cast to the y_pred.dtype.
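The BinaryCrossentropy values quoted above can be sanity-checked with plain NumPy, without TensorFlow. The sketch below assumes the standard numerically stable sigmoid cross-entropy identity, max(x, 0) - x * z + log(1 + exp(-|x|)) for logit x and label z; the helper name bce_from_logits is ours, not a Keras API.

```python
import numpy as np

def bce_from_logits(y_true, logits):
    """Mean sigmoid cross-entropy computed directly from logits.

    Uses the stable form max(x, 0) - x*z + log(1 + exp(-|x|)),
    then averages over all samples (the 'sum_over_batch_size' behavior
    for a flat batch).
    """
    x = np.asarray(logits, dtype=np.float64)
    z = np.asarray(y_true, dtype=np.float64)
    per_sample = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))
    return per_sample.mean()

# Reproduces Example 1 from the BinaryCrossentropy section above.
loss = bce_from_logits([0, 1, 0, 0], [-18.6, 0.51, 2.94, -12.8])
print(round(loss, 3))  # 0.865
```

Only the sample at logit 0.51 with label 1 and the confidently wrong sample at logit 2.94 contribute meaningfully; the two large-magnitude correct logits add essentially zero loss.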
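Likewise, the 1.177 result shared by the CategoricalCrossentropy and SparseCategoricalCrossentropy examples follows from the textbook definition -sum(y_true * log(y_pred)) averaged over the batch. This NumPy sketch is ours; the small clipping constant is an assumption to guard the log against the exact-zero probability in the example.

```python
import numpy as np

EPS = 1e-7  # assumed clipping threshold to avoid log(0)

def categorical_ce(y_true_onehot, y_pred):
    """Batch mean of -sum(y_true * log(y_pred)) along the class axis."""
    p = np.clip(np.asarray(y_pred, dtype=np.float64), EPS, 1.0)
    t = np.asarray(y_true_onehot, dtype=np.float64)
    return (-(t * np.log(p)).sum(axis=-1)).mean()

y_true = [[0, 1, 0], [0, 0, 1]]            # one-hot labels for classes 1 and 2
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
print(round(categorical_ce(y_true, y_pred), 3))  # 1.177
```

The sparse variant with integer labels [1, 2] selects the same probabilities (0.95 and 0.1), which is why both class examples print identical numbers: -log(0.95) ≈ 0.0513 and -log(0.1) ≈ 2.303, averaging to 1.177.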
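The KLDivergence numbers can be reproduced the same way from the documented formula loss = y_true * log(y_true / y_pred). As in the kl_divergence doctest above, both tensors are clipped before taking the log; the helper below is our own sketch under that assumption.

```python
import numpy as np

EPS = 1e-7  # clipping threshold, mirroring the clip in the doctest above

def kl_div(y_true, y_pred):
    """Per-sample sum of y_true * log(y_true / y_pred), then batch mean."""
    t = np.clip(np.asarray(y_true, dtype=np.float64), EPS, 1.0)
    p = np.clip(np.asarray(y_pred, dtype=np.float64), EPS, 1.0)
    return (t * np.log(t / p)).sum(axis=-1).mean()

# Reproduces the KLDivergence class example with 'auto' reduction.
print(round(kl_div([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]), 3))  # 0.458
```

Note how clipping explains the slightly negative second element (-3.08e-06) in the 'none'-reduction output: the all-zero target row becomes tiny positive values whose log-ratio terms are negative.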


