We should have `dot(x::RowVector, y::RowVector) = dot(transpose(x), transpose(y))` (see mailing list). Mathematically, an inner product on a vector space induces an inner product on the dual space, so this seems unambiguous to me.
See also the discussion in #22220 about whether `dot` should always be an inner product (i.e. produce a scalar). This ties into the alternate suggestion by @dlfivefifty that `dot(a,b)` could be viewed as a synonym for `a'b`, in which case `dot(x',y')` would be the outer product `x*y'`. (I'm skeptical of this proposal, as I explained on the mailing list, but it is a possibility.)
I don't think that's my proposal, as I agree it should return a number. Perhaps `dot(a,b) = vec(a)'vec(b)`.
Sorry for misunderstanding you. @dlfivefifty, `vec(a)'vec(b)` is precisely what `vecdot` is for. The problem with defining `dot == vecdot` is that it would then be inconsistent with `norm` for matrices, since `norm(::Matrix)` is the induced norm, not the Frobenius norm (`vecnorm`).
In any case, if we agree that `dot(::RowVector, ::RowVector)` should return a number, then there is only one number it could reasonably return, and that is the (correction: conjugate of the) original dot product (from the dual of the dual).
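For concreteness, the conjugation identity can be checked with plain complex arithmetic in any language. Here's a small Python sketch (the `dot` helper is hypothetical, written to follow Julia's convention of conjugating the first argument):

```python
# Sanity check: with dot(x, y) = sum(conj(x_i) * y_i) (Julia's convention),
# the inner product induced on the dual space satisfies
# dot(x', y') == conj(dot(x, y)), where x' has entries conj(x_i).

def dot(x, y):
    """Inner product conjugating the first argument (Julia's convention)."""
    return sum(xi.conjugate() * yi for xi, yi in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]

xp = [xi.conjugate() for xi in x]  # entries of the adjoint x'
yp = [yi.conjugate() for yi in y]  # entries of the adjoint y'

assert dot(xp, yp) == dot(x, y).conjugate()
```

All the entries here have small integer components, so the equality is exact even in floating point.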
Good point. For `Vector` and `RowVector` the matrix norm and vector norm coincide, so I guess these are the only two cases where there is a natural inner product consistent with `norm`.
> there is only one number that it could reasonably return, and that is the original dot product

Can you explain more clearly why it should return `dot(transpose(x), transpose(y))` rather than `dot(adjoint(x), adjoint(y))`? Note that `dot(transpose(x'), transpose(y'))` is the complex conjugate of `dot(x, y)` for `Vector`s `x` and `y`.
My definition indeed gives `dot(x', y') == conj(dot(x, y))`. This is correct.
@garrison, one simple reason is that your suggested `dot(adjoint(x), adjoint(y))` definition would violate the linearity property of inner products. You should always have `dot(x, α*y) = α * dot(x, y)` for any scalar `α`, but your definition would give `conj(α)` instead.
Another way to see it is to consider the case where `x` and `y` are scalars: there `x'` is simply `conj(x)`, and our current definition already gives `dot(x', y') == conj(dot(x, y))`.
How about `dot(x,y) = dot(adjoint(y),adjoint(x))`?
Probably the most efficient thing (to be non-allocating in as many cases as possible) would be something like:

```julia
dot(x::RowVector, y::RowVector) = dot(transpose(x), transpose(y))
dot(x::RowVector{<:Number,<:ConjArray}, y::RowVector{<:Number,<:ConjArray}) = dot(adjoint(y), adjoint(x))
```

with a similar optimization for `norm(x::RowVector)`.
I assume it will also make sense to have `dot(::RowVector, ::AbstractVector)` and `dot(::AbstractVector, ::RowVector)` as well?
@garrison, I don't think so. An `AbstractVector` is a column vector, and we don't have a standard inner product between a vector and a dual vector (row vector). (Note that `RowVector` is not a subtype of `AbstractVector`.)
@stevengj @garrison
Hello, I would like to take on this task. In which file should the `dot` function be added? Greetings
@christianbender, probably in https://github.com/JuliaLang/julia/blob/master/base/linalg/rowvector.jl
@stevengj Since `RowVector` is deprecated, shouldn't this issue be closed?
Yes, it should be closed:

```julia
julia> a = rand(5)
5-element Array{Float64,1}:
 0.027497381913224928
 0.47043712559107065
 0.22305578781443836
 0.24976730756765697
 0.27624822089770085

julia> dot(a',a')
0.41051786710273175
```
It hits the fallback `dot` though:

```julia
julia> using BenchmarkTools, LinearAlgebra

julia> a = rand(ComplexF64, 1000); b = rand(ComplexF64, 1000);

julia> @btime dot($a, $b);
  237.675 ns (0 allocations: 0 bytes)

julia> @btime dot($(a'), $(b'));
  1.188 μs (0 allocations: 0 bytes)
```