With `permute` you are just rotating the tensor, so the order of the elements is preserved. On the other hand, if you `reshape`, you can see that you are modifying the ordering, because this is not rotating the cube but mapping it in an ordered way from right to left: it takes numbers until it fills the new dimensions. That's why this operation is different from permuting.

So an example about how to apply `view` could be the following one. Say you have a tensor of shape `BxNxHxW`: it contains `B` batches of `N` images whose size is `HxW`, and you want to make a montage of these images in a single one, concatenating in the columns; your outgoing dimension would be `BxHx(N*W)`.

So let's see what happens if you reshape, vs. permute + reshape, vs. permute without paying attention. I'm converting the RGB images to grayscale and cropping them to have the same size:

```python
Im1 = np.mean(skio.imread('/home/jfm/Downloads/dog.jpg'), axis=2)
Im2 = np.mean(skio.imread('/home/jfm/Downloads/cat.jpg'), axis=2)
```

The cropped cat looks like that (that's grayscale):

![]()

If you just reshape, you get a wrong ordering:

![]()

As you are reordering, it is getting the information in the original order, which is: all the columns of image 1, all the rows of image 1, all the columns of image 2, all the rows of image 2, and so on. What's going on there? If you pay attention, the image is resized to fit in the desired shape.

If you permute and set the dimensions before reshaping, but without paying attention, here you are filling by taking the info of one image and then the other, because you set `N` at the right: so it takes the information of image 1, column 1, then image 2, column 1, and so on.

![]()

However, if you properly order the dimensions, you achieve what you want, which is: all the columns of image 1, then all the columns of image 2.

![]()

If you want to reshape, the ordering only remains for contiguous dimensions. Contiguous here means 1-2, 2-3, even 1-2-3, but not 1-3, for example.

In this composition you have a `Batch x Images x rows x columns` tensor, `m`. When you reshape, it takes pixels from the dimensions at the right and places them until it fills the new shape's dimensions. The problem is that in `m`, dimension 3 has 1100 elements, while dimension 2 of `m_reshape` has 2200 elements, so it actually takes 2 rows from `m` to fill one row of `m_reshape`. So it is filling row 0 of `m_reshape` with column pixels from `m`: you are basically taking dog pixels to create the image on the top left, and then more dog pixels to create the image on the top right. That's why you can see that stretching effect on the image. Obviously, when the dog image is finished, that is, when you have already used up all of its pixels, it moves on to keep taking pixels; the next chunk corresponds to the cat and basically has the same effect. The interesting point is that, as you are using 2 rows of `m` to fill a row of `m_reshape`, you are later on filling those "missing" columns with cat info, creating this strange composition.

Today in NumPy there's `transpose`, which "reverses or permutes" an array's axes. A "transposition," however, is typically a swap of two elements, like what `swapaxes` does. This issue proposes a new function, `permute`, which is equivalent to `transpose` except that it requires the permutation to be specified. It is the correct mathematical name for the operation, and it would be helpful to provide library writers a mechanism to permute both NumPy-like arrays and PyTorch tensors. PyTorch uses `transpose` for transpositions and `permute` for permutations, and it plans to implement `swapaxes` as an alternative transposition mechanism, so `swapaxes` and `permute` would work on both PyTorch tensors and NumPy-like arrays (and make PyTorch tensors more NumPy-like). When considering new names, I think natural questions are: Would a user expect a function with this name to do what it does? Does this name conflict with any existing functions?
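To make the rotate-vs-refill distinction concrete, here is a minimal NumPy sketch; the array and variable names are illustrative, not from the original post:

```python
import numpy as np

# A small 2x3 array: [[0, 1, 2], [3, 4, 5]].
a = np.arange(6).reshape(2, 3)

# Permuting the axes "rotates" the array: every element keeps its
# neighbours along the original axes, only the axes are exchanged.
p = a.transpose(1, 0)   # shape (3, 2)

# Reshaping instead refills the new shape from the flattened,
# right-to-left (row-major) order 0, 1, 2, 3, 4, 5.
r = a.reshape(3, 2)

print(p.tolist())  # [[0, 3], [1, 4], [2, 5]]
print(r.tolist())  # [[0, 1], [2, 3], [4, 5]]
```

Both results have shape `(3, 2)`, but only the permuted one preserves neighbourhoods; the reshaped one pairs elements that were never adjacent along an axis.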
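The montage example can be sketched end-to-end with tiny numbers. Assuming hypothetical sizes B=1, N=2, H=2, W=3 (NumPy's `transpose` plays the role of PyTorch's `permute` here):

```python
import numpy as np

B, N, H, W = 1, 2, 2, 3
# m holds B batches of N images of size HxW:
# a Batch x Images x rows x columns tensor.
m = np.arange(B * N * H * W).reshape(B, N, H, W)

# Wrong: reshaping directly consumes pixels from the rightmost
# dimensions onward, so the first image fills the top of the montage
# and the second image fills the bottom, instead of sitting side by side.
wrong = m.reshape(B, H, N * W)

# Right: first permute so the dimensions to be merged (N and W) are
# adjacent and in the desired order, then reshape.
# (In PyTorch: m.permute(0, 2, 1, 3).reshape(B, H, N * W).)
right = np.transpose(m, (0, 2, 1, 3)).reshape(B, H, N * W)

print(wrong[0].tolist())  # [[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]]
print(right[0].tolist())  # [[0, 1, 2, 6, 7, 8], [3, 4, 5, 9, 10, 11]]
```

In the correct result, each row of the montage holds row `h` of image 1 followed by row `h` of image 2, which is exactly the side-by-side concatenation in the columns.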
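For the naming discussion: NumPy today has only `transpose` and `swapaxes` (there is no `np.permute`), so a quick sketch of how the existing functions split the two meanings:

```python
import numpy as np

a = np.zeros((2, 3, 4))

# transpose with no arguments reverses the axes ...
print(np.transpose(a).shape)             # (4, 3, 2)

# ... but with an explicit axis order it performs a general permutation,
# which is what the proposed `permute` would require you to spell out.
print(np.transpose(a, (1, 2, 0)).shape)  # (3, 4, 2)

# swapaxes is a true "transposition": it swaps exactly two axes.
print(np.swapaxes(a, 0, 2).shape)        # (4, 3, 2)
```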